Sunday, November 28, 2010

From Deferred to Inferred, part uno

How are things going with the A-Team? Well, still no mapper, but I'm happy to introduce you to these 3 men:
Writer, Brian
Our Hank Moody from California. He has many years of experience on several projects, studied Anthropology (very useful for background research), had a career as a musician, and is currently writing a novel. Find out more at this blog:
brianlinvillewriter

Sound & Music, Zack
DJ Zack found some free hours to put into this project. From threatening music to flushing-toilet sound effects. He has already done the audio for some (horror) movies!
Soundcloud

3D (Character) modeler, Julio
His surname and Spanish background make him sound like a hot singer, but the man is actually stuffed with horrific rotten ideas, plus the skills to work them out. Mothers, watch your daughters when they come home with a Spaniard!


Now that there is a small team, we can focus on the next targets… Such as fishing, paintball, playing board games, and other cozy group events. Ow, and maybe we’ll make a second demo movie as well…

The couple of votes on the poll show that there is interest in some more technical details. Hmmm… not completely surprising with the Gamedev.com background. Now I won't turn this blog into something like Humus3D or Nehe. No time, and there are far better resources on the internet for learning the theory or implementations. However, I can of course write in more detail about the techniques that are being implemented in the engine.


To start with: some adjustments to the rendering pipeline. As you may have read before, the engine currently uses Deferred Rendering. I'll keep it that way, but with some tricks borrowed from another variant: "Inferred Rendering". In case you already know what Deferred Rendering is, you can skip this post; next week I'll be writing about Inferred Rendering. Otherwise, here's another quick course. I'll try to write it in such a way that even a non-programmer may understand some bits. Ready? Set? Go.


==========================================================

Good old Forward Rendering:
Since Doom we all know that lighting the environment is an important, if not the most important, way to enhance realism. Raytracers try to simulate this accurately by firing a whole lot of photons through the scene to see which ones bounce into our eyes. Too bad this still isn't fast enough for really practical usage (although it's coming closer and closer), so lighting in a game works pretty differently from reality. Though with shaders, we can come pretty far. In short:
For all entities that need to be rendered:
- Apply shader that belongs to this entity
- Update shader parameters (textures, material settings, light positions…)
- Draw entity geometry (a polygon, a box, a cathedral, whatever) with that shader.

The magic happens on the video card's GPU, inside that shader program. Basically a shader computes where to place the geometry vertices, and how to color its pixels. When doing traditional lighting, a basic shader generally looks like this:

lightVector  = normalize( lightPosition.xyz - pixelPosition.xyz );
diffuseLight = saturate( dot( pixelNormal, lightVector ) );    // scalar Lambert term
Output.pixelColor.rgb = texture.rgb * diffuseLight * lightDiffuseColor.rgb;

There are plenty of other tricks you can add in that shader, such as shadows, attenuation, ambient, or specular light. Anyway, as you can see, you'll need to know information about the light such as its position, color and possibly the falloff distance. So… that means you'll need to know which lights affect the entity before rendering it… In complex scenes with dozens of lights, you'll need to assign a list of lights to each surface / entity / wall or whatever you are trying to render. Then the final result is computed as follows:
pixelColor = ( light1 + light2 + ... + lightN ) * pixelColor + … other tricks

- Enable additive blending to sum up the light results
- Apply lighting shader
- For each entity:
    - Pass entity parameters to the shader, such as textures
    - For each light that affects the entity:
        - Pass light parameters to the shader (position, range, color, shadowMap, …)
        - Render the entity geometry

This can be called "Forward Rendering". It has been used countless times, but there are some serious drawbacks:
- Sorting out affected geometry
- Having to render the geometry multiple times
- Performance loss when overdrawing the same pixels

First of all, sorting out which light(s) affect, let's say, a wall can be tricky. Especially when the lights move around or can be switched on and off. Still it is a necessity, because rendering that wall with ALL lights enabled would be a huge waste of energy and would probably kill the performance straight away as soon as you have 10+ lights, while surfaces are usually (directly) affected by only a few lights.

In the past I started with Forward Rendering. Each light would sort out its affected geometry by collecting the objects and map-geometry inside a certain sphere. With the help of an octree this could be done fairly fast. After that I would render the contents per light.



Another drawback is that we have to render the entities multiple times. If there are 6 lights shining on that wall, we'll have to render it six times as well to sum up all the individual light results… Wait… shaders can do looping these days, right? True, you can program a shader that does multiple lights in a single pass by walking through an array (see the sketch below). BUT… you are still somewhat limited due to the maximum count of registers, texture units, and so on. Not really a problem unless you have a huge amount of lights though. But what really kills the vibe is the fact that each entity, wall, or triangle could have a different set of lights affecting it. You could eventually render per triangle, but that would make things even worse. Whatever you do, always try to batch it. Splitting up the geometry into triangles is a bad, bad idea.
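To give an idea, here's a minimal sketch of such a multi-light loop in Cg. The fixed array size, the parameter names, and the missing attenuation/shadows are just assumptions for the example, not how my engine does it:

#define LIGHT_COUNT 4   // assumed fixed array size

float4 main( float3 pixelPosition : TEXCOORD0,          // world-space position
             float3 pixelNormal   : TEXCOORD1,          // world-space normal
             float2 texcoord      : TEXCOORD2,
             uniform sampler2D diffuseMap,
             uniform float4 lightPosition[LIGHT_COUNT], // xyz = light position
             uniform float4 lightColor[LIGHT_COUNT]     // rgb = diffuse color
           ) : COLOR0
{
    float3 albedo      = tex2D( diffuseMap, texcoord ).rgb;
    float3 summedLight = float3( 0, 0, 0 );

    // Sum the diffuse contribution of every light in a single pass
    for ( int i = 0; i < LIGHT_COUNT; i++ )
    {
        float3 lightVector = normalize( lightPosition[i].xyz - pixelPosition );
        summedLight += saturate( dot( pixelNormal, lightVector ) ) * lightColor[i].rgb;
    }
    return float4( albedo * summedLight, 1 );
}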

Third problem. Computers are stupid, so if there is a wall behind another one, they will actually draw both walls. However, the one in the back will be (partially) overwritten by pixels from the wall in front. Fine, but all those nasty light calculations for the back wall were a waste of time. Bob Ross paints in layers, but try to prevent that when doing a 3D game.


There is a fix for everything, and that's how Deferred Rendering became popular over the last, let's say, 5 years. The 3 problems mentioned are pretty much fixed with this technique. Well, tell us grandpa!


Deferred Rendering / Lighting:
The Deferred pipeline is somewhat different from the good old Forward one. Before doing lighting, we first fill a set of buffers with information for each pixel that appears on the screen. Later on, we render the lights as simple (invisible) volumes such as spheres, cones or screen filling quads. These primitive shapes then look at those “info-buffers” to perform lighting.


Step 1, filling the buffers
As for filling those "info-buffers"… With "Render Targets" / FBOs, you can draw in the background to a texture buffer instead of the screen. In fact, you can render onto multiple targets at the same time. Current hardware can render to 4 or even 8 textures without having to render the geometry 4 times as well. Since textures usually hold 4 components per pixel (red, green, blue, alpha), you could write out 4x4 = 16 info scalars. It's up to you how you use them, but my engine does this:

There are actually more sets of buffers that hold motion vectors, DoF / SSAO info, fog settings, and, very important: depth.
How to render to multiple targets?
- Enable drawing on a render target / FBO
- Enable MRT (Multiple Render Targets). The number of buffers depends on your hardware.
- Attach 2, 3 or more textures to the FBO to write to.
- Just render the geometry as usual, only once
- Inside the fragment shader, you can define multiple output colors. In Cg for example:
out float4 result_Albedo : COLOR0,
out float4 result_Normal : COLOR1,
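To complete the picture, a minimal sketch of such a buffer-filling fragment shader in Cg. The layout (albedo, normal, depth) and the parameter names are just assumptions for the example; as said, my own buffers store more than this:

void main( float2 texcoord    : TEXCOORD0,
           float3 worldNormal : TEXCOORD1,
           float  viewDepth   : TEXCOORD2,          // linear depth from the vertex shader
           uniform sampler2D diffuseMap,
           out float4 result_Albedo : COLOR0,       // goes to attached texture 1
           out float4 result_Normal : COLOR1,       // goes to attached texture 2
           out float4 result_Depth  : COLOR2 )      // goes to attached texture 3
{
    result_Albedo = tex2D( diffuseMap, texcoord );          // surface color
    result_Normal = float4( normalize( worldNormal ), 0 );  // world-space normal
    result_Depth  = float4( viewDepth, 0, 0, 0 );           // depth for lighting & post effects
}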



Sorry for this outdated shot again, but since I can't run the engine at this moment because of the pipeline changes, I wasn't able to make new shots of course. By the way, the buffer contents have changed since my last post about Deferred Rendering, as I'm adding support for new techniques. Anyhow, here is an idea of what happens in the background.


Step 2, Rendering lights
That was step 1. Notice that you can use these buffers for more than (direct) lighting. Especially depth info and normals can be used for many other (post) effects. But we'll keep it to lighting for now. Instead of sorting out who did what and where, we purely focus on the lights themselves. In fact, we don't render any geometry at all! That's right, even with 5 billion lights you only have to render the geometry once, in the first step. Not entirely true if you want translucent surfaces later on as well, but forget about that for now…

For each light, render its volume as a primitive shape. For example, if you have a pointlight with a range of 4 meters on coordinates {x3, y70, z-4}, then render a simple sphere with a radius of 4 meters on that spot. Possibly slightly bigger to prevent artifacts at the edges.
* Pointlights --> spheres
* Spotlights --> cones
* Directional lights --> cubes
* Huge lights (sun) --> screen quad

Aside from the sun, you only render pixels at the place where the light can affect the geometry. Everything projected behind those shapes *could* be lit. In case you have rendered the geometry inside this buffer as well, you can tweak the depth-test so that the volumes only render at the places where they intersect the geometry.

Now you don't actually see a sphere or cone in your scene. What these shapes do is highlight the background. Since we have these (4) buffers with info, we can grab those pixels inside the shader with some projective texture mapping. From that point on, lighting works the same as in a Forward Renderer. You can also apply shadowMaps, and so on.

< vertex shader >
Out.Pos = mul( modelViewProjectionMatrix, in.vertexPosition );

// Projective texture coordinates, to sample the screen-sized info-buffers
Out.projTexcoords.xyz = ( Out.Pos.xyz + Out.Pos.www ) * 0.5;
Out.projTexcoords.w   = Out.Pos.w;

< fragment shader >
// Grab the surface info "behind" this light volume from the buffers.
// A position buffer is assumed here; you could also reconstruct the
// position from the depth buffer instead.
float4 backgroundAlbedo = tex2Dproj( albedoBuffer,   in.projTexcoords );
float4 backgroundNormal = tex2Dproj( normalBuffer,   in.projTexcoords );
float3 backgroundPos    = tex2Dproj( positionBuffer, in.projTexcoords ).xyz;

float3 lightVector = normalize( lightPosition.xyz - backgroundPos );
result.rgb = saturate( dot( backgroundNormal.xyz, lightVector ) )
             * backgroundAlbedo.rgb * lightDiffuseColor.rgb;
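One thing still missing above is the falloff. Here's a small sketch of how linear distance attenuation could be appended to that point-light pass; storing the range in lightPosition.w is an assumption for this example:

// Fade the light out towards its range (e.g. the 4 meters from the
// sphere example). lightPosition.w is assumed to hold that range.
float dist        = length( lightPosition.xyz - backgroundPos );
float attenuation = saturate( 1.0f - dist / lightPosition.w );   // 1 at the center, 0 at the range
result.rgb *= attenuation;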

Since every pixel of a light shape only gets rendered once, you also fixed that “overdrawing” problem. No matter how many walls there are behind each other, only the front pixel will be used.

Thus, no overdraw, only necessary pixels getting drawn, no need to render geometry again for each light… One hell of a lighting performance boost! And it probably results in even simpler code, as you can get rid of sorting mechanisms. You don't have to know which triangles lightX affects; just draw its shape and the rasterizer will do the magic.



All in all, Deferred Rendering is a cleaner and faster way to do lighting. But as always, everything comes at a price…
- Does not work for translucent objects. I'll tell you why next time.
- Filling the buffers takes some energy. Not really a problem on modern hardware though.
- Dealing with multiple ways of lighting (BRDF, Anisotropic, …) requires some extra tricks. Not impossible though. In my engine each pixel has a lighting-technique index, which is later on used to fetch the proper lighting characteristics from an atlas texture in the lighting shader (see the sketch below).
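To illustrate that last trick, a hedged sketch of such an atlas fetch in Cg. Where the index is stored and how the atlas is laid out are assumptions for the example; the real engine buffers differ:

// Assumed: the technique index lives in the normal buffer's alpha
// channel (0..1), and each row of the atlas holds the lighting
// characteristics (specular power, falloff curve, ...) for one technique.
float  techniqueIndex = tex2Dproj( normalBuffer, in.projTexcoords ).a;
float2 atlasCoord     = float2( 0.5f, techniqueIndex );   // sample the middle of that row
float4 lightingParams = tex2D( materialAtlas, atlasCoord );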


Does that "Inferred Rendering" fix all those things? Nah, not really. But I have other reasons to copy some of its tricks. But that will be next time, Gadget.

Sunday, November 21, 2010

Revenge of the nerds

What can I say? The movie was received very positively pretty much everywhere. Of course a few improvement tips here and there, but overall I couldn't wish for better reactions! Especially the "creepy", "goose bumps" and "peed my pants" comments are heart-warming! After all, it is a horror game, so for me delivering goose bumps is like delivering pizza for a pizza courier. Core business. So... A BIG thanks! Really, feedback (including criticism) from you is the (red) diesel for projects like these. Ow, and while browsing around, I saw the project was even mentioned at an online game-magazine, http://www.thegamingvault.com. Pretty cool huh?

* Ow, and maybe funny to ask. Did you find the tiny “Easter Egg” in the movie? Tip: “I am a liar!”


But enough ego trippin'. This is just the beginning. I'll have to resume code-stamping, and try to assemble a team to lift this project to the next level. Speaking of which... Holy crap. One day you have nothing, wondering if someone will ever contact you at all. The next day the mailbox is filled. Making a movie and asking on gamedev wasn't a bad idea (so if you ever want to start up a project, you could follow this route).

I was kind of nervous before posting the movie and a "Please help me, I'm a little puppy with big brown eyes" post. I'm a naive Goofy, who can't say "No.". That's very sweet, but that attitude can get you into serious problems when it comes to "doing business". I knew that IF people started applying for the project, I would have to:
- Watch out for those who want to ride on your work, profit from it or even steal ideas
- Kindly reject help if it doesn't fit in the project
- Decide who's in and who's not

The whole game concept is a fragile thing, a Ming vase. As it depends largely on the story and a few not-seen-before game mechanics, I really have to watch it. If I told all my ideas, EA Games or another big guy who can produce games on the fly would have it on the store shelves tomorrow. Or the story just wouldn't work anymore, because anyone with interest in the project would already know the twist. Are you going to watch a movie if you already know the end? If it was Predator, Rambo or Robocop... YES of course. But this genre is different. You don't play Silent Hill for entertainment. For some it's even so fuck'n scary that you don't even want to play it. But the story has to be found out; it's your duty.

The beautiful thing about the internet is that people come in touch easily. Just look at any blog or forum. Americans discuss games with Iranians. Vikings share their jokes with South Africans, and an Asian gets compliments from a Martian. But on the other hand... you don't really know who's on the other side. You all think I'm Rick from Holland, but in fact I am prince Mambassa al Bedala III from the African federation.



As for rejecting people that offer their help, that also hurts. You simply can't invite everyone to the party. But having to say "Sorry, no" is difficult. I never want to hurt someone's feelings by telling them "you're not good enough". Certainly not when they are offering their precious time, for FREE! I don't want to sound cocky. Nevertheless, Microsoft won't put everyone on the Windows 8 core code either, just out of compassion. Now Microsoft pays, while I can't. But that doesn't mean I don't have serious plans.

From my own experience, amateur/hobby teams often die early because of taking on too much ballast at once. The whole development has to go slick, like a well-oiled machine. But you can't expect to manage a 15-man army with different skills, personalities and working paces if you are busy yourself. I have work, extra work, girl + kid, friends and my own programming tasks on this project. That's why I would like to start with a small team. And if possible, keep it small. So basically, I have to make decisions. Our little boat only has a few seats!



So, what's the status, doc? I'm still talking with a couple of guys, and if everything goes well the project will be extended with:
- Story & biography writer(s) / Game concept document
- Sound & music composers
- 2D texture artist
- 3D (character) artist (for making the next ugly bastard in a demo movie)
We're not complete yet, as I would also like to have a (2D) concept artist, mapper and 3D asset modeler. But there is still a thread running on Polycount, so I have to fish patiently. Yeah, patiently… If I could do it over again, I would send each replier an automated "sign up" form generated by C-3PO, asking for their portfolio and some other common questions. Then wait X weeks to sort all the forms out, and reply to the winning tickets… Ehm, sounds so impersonal. But I also hate to be like an "America's Next Top Model(er)" jury member. Difficult, difficult.

All in all, this is a whole new learning trajectory. Now that there are people kind enough to offer their (free!) services, I must grab the chance with both hands. The good thing is that I can focus more on the programming part again, and hopefully these guys will create far better work than I could. But don't forget you will also have to lead these people now. You don't have to be a babysitter, but each individual wants to be taken seriously. Which means you'll have to provide enough fun & interesting tasks, and give constructive feedback (including the ability to be honest when the delivered work isn't that good).

Am I ready for it? Well, you can't prepare yourself perfectly for everything in life. Just like becoming a daddy for the first time, you will never have that "I think I'm ready to make one now" feeling. Sometimes you have to stop thinking and just do it. Set sail, mateys!

Saturday, November 13, 2010

Movie!

Grab a 5-liter cola, put on your 3D goggles, and get yourselves some ear protectors to survive the noisy sound. Yes, the quality is not superb (I didn't try the DVD recording thing yet), but at least there is finally something to look at. Now the verdict is up to you, shoot me! But please don't kick me in the balls ;)

Well, you can't expect too much from a half-finished engine + some programmer art. But since this is supposed to be a horror game, I hope this little movie gives you a few sweaty thrills. Just a little bit. The problem with being the "producer" of such a thing is that you really don't know anymore whether your product is good, bad, scary, or just plain stupid. After programming, playing, watching and thinking about it at least 7 million times, I really don't have a clue what others may think. Yes, of course I tested it first on a big television with my girlfriend, but she is also afraid of tiny spiders, thunder 200 kilometers away, me, cheese and everything else. So that's not a really good reference.

For that matter, I'm quite relieved it's finished now. Not only because it is a first milestone, but also because I can finally move on to something else. I haven't really been programming bigger techniques the last 5 months, such as AI or a new ambient lighting system.

Don't know if it matters, but it was encoded with H.264, MP4 format. In case you can't play it or something, please let me know. I have zero experience with online movies and stuff.

In the meanwhile, let's see what this movie does. As I have told a few times before, one of the goals of this blog and demonstration movie is to hopefully attract a few talented artists. So I can focus more on the technical part, while the creative gurus make dreams come true. Just for a start, a mapper/modeler and concept/texture artist could make life so much easier. I'd love to see what a next movie would do with the touch of some true artistic talent.

Maybe that is wishful thinking, but you'll never know if you don't give it a try. One way or another, if I ever want this project to grow into something more serious, I can't do without them. The days of programming Pac-Man in the attic are over.

In case you are reading this as a programmer; yes, I'll probably need a few experts on specific matters (physics, IK, sound for example) as well, but not yet. Maybe nothing will change in the next months, but if, IF one or two artists come in touch, I'll first want to put my energy into that. From other amateur hobby projects I've learned not to choke yourself in your own enthusiasm! Be easy, don't rush! First things first.


Finally, is there something to say about this movie? Well, those who have read this blog the last months have probably seen most of the visuals and techniques already. However, I think I should add that some of the content was taken from other games:
- Half-Life 2: some of the wall and floor textures, a few footstep sounds
- Doom 3: many of the sounds, a few blood decals
- Crysis Warhead: a concrete texture

To be very honest, I have no idea whether this is illegal or not. Yet another reason to search for some artists quickly, so we can replace that content (which means we'll also need a sound engineer in the future)! How about the TO DO's? Alright, here's the 2011 plan:
• Hopefully find one or two talented artists, and create a second movie
• AI module (improving navigation mesh, task system, …)
• Realtime Ambient 2.0
• Volumetric fog & lightshafts
• Adjusting the rendering pipeline to something like Inferred Lighting
• FMOD sound, maybe
• Switching over to Newton 2.0 (still using the older one)
• And tweak about anything else

Ok, I’m going to rest for a moment. All coding and no play makes Jack a dull boy.

Sunday, November 7, 2010

Leaked sex-tapes!

With that title, I'll probably have 10 times as many hits. But no, let's talk about recording games.

Well... what a drama. I should have known it. Anti-climax. Done with programming (almost), 3D models ready, everything works... except the recording part. There are quite a few tools on the market that will capture video and audio while playing a game. FRAPS, KKapture, ZDSoft recorder, Wegame, xFire... and probably I forgot a dozen more. From what I read, FRAPS is one of the best options, although it costs a few bucks ($37, fair enough). My first attempts to record something were awful though.

KKapture didn't work right from the start. Maybe because the game isn't actually full screen yet (embedded view in the Editor). Wegame works pretty well… except that the keyboard input is suddenly missing. Now that is kind of a problem. Then there was FRAPS. Three issues:

- Sound crackles. A bombarded World War I gramophone player sounds better.
- The file quickly becomes 4 gigs... which will halt the recording, because my old FAT32 file system doesn't allow bigger files (same issue as with the maximum 4 gigs of RAM on a 32-bit system). FRAPS doesn't compress much, to save speed and quality, so that means massive files.
- Performance... Scientists calculated that
"heavy duty game + recording = 300 KG lady rolling in syrup".
The framerate dropped to 18, maybe 20, at its best.

Now what? Record it with the handycam from the screen? Maybe I'll do that if this game ever reaches E3 or something. To create a hype with so-called "illegally recorded content!" :). Nah, that's not going to work. So, if you are in trouble, who are you going to call? Ghostbusters? No, the A-Team of course, fool. Here's some advice:

- If you are missing audio, make sure none of the related Windows volume bars are muted.
- Reduce the volume bar(s) used for recording. This can take away quite a lot of crackles. Just prevent very loud noise.
- Stream the video to a fast hard drive. If possible, a different drive than the one you are using for the game, Windows, virtual memory, etc.
- 20 or 25 frames per second is enough to create a movie. It won't be HD or anything, but better something than nothing at all.
- If the 4 gig limit is a real problem, you may want to switch over to an NTFS file system instead of FAT32 (if you are not already using it).
- And my true savior; run the game at a lower screen resolution, 1024 x 768 for example. Fewer pixels = smaller video files, lower recording requirements, and maybe even a somewhat faster game, making it easier to control the whole thing. Someone still has to ride the camera, right? Yes, the quality will be lower, but then again pixels are smeared/blurred after compressing anyway.

With these tricks, I can let FRAPS record "good enough". Still a little bit jerky and with low-quality sound (especially for high-volume stuff), but it is somewhat acceptable. I'm no expert with this stuff, but tools like VirtualDub may help you filter out some of the crap afterwards.

Part of the movie is a proper flow of events of course. At 00:32 Jean-Claude van Damme walks in, etcetera. The debug text is there to see which triggers went off. Ow, if you wonder who "Johny" is: I had to give that poor guy a name. Sounds more comfortable than "monster1" or "beast_on_hook".


A really good solution would be recording via an (external) device of course. Unfortunately my "studio", erm, little corner in the living room, doesn't have such professional hardware. Except for the 50-year-old tape recorder I once got from my father. Mom and dad DO have a DVD recorder though. So I was thinking...

My videocard, a GeForce 8800 GTS 640 MB, has a DVI output. The recorder, which normally picks up its signal from a television, has SCART. YouTube & Google "dvi to scart" give all kinds of ideas to try out. Maybe I can directly connect the recorder to the videocard & soundcard outputs. Or otherwise indirectly, via a (modern digital) TV. That is certainly worth a try. If it succeeds, I'll tell you how I did it.

Now, let's hope I can put a movie here next week. It's about time, don't you think? I'm already practicing the script (the route I'll walk, and where to look).

Monday, November 1, 2010

A long time ago


Hey, soft particles. Where did they come from?

Thanks to the profiles of the followers here, I noticed there are quite a few blogs that tell a fictional (game) story. Like a book, adding a new chapter each week (or month). Or like Half-Life 2, adding a new episode each... hundred years. Come on man, hurry up already. Anyway, how about this? Put on your pyjamas, grab some hot chocolate milk, and assemble around the fireplace. Grandpa is going to tell a story.

---------------------------------------------------------------------
Day 7, Sunday 05:01 AM
---------------------------------------------------------------------
Didn't forget to turn off the alarm, but I woke up anyway. Normally I would feel relieved and turn around with a little smile. "Just a mistake, it's weekend", that wonderful feeling. But I got up and left the bed. Sleeping here doesn't feel comfortable. There is no light in the bedroom; it's warm, moist, oppressive. Even after a week the whole apartment still feels strange. Not hostile, but certainly not like home either.

Weekend, Sunday, my day off. Maybe some relaxation will break the tension. Not that my new job has been hard on me though. Mopping some floors, painting a wall, replacing a light bulb, locking the hall doors at 23:00. Or delivering post. As a caretaker of this building, I work long days, but there is no pressure. So far I haven't seen a chief or boss anywhere. Only a few instruction letters and a couple of calls from the... boss. I don't really know his name.


Thinking about it, I haven't seen a single soul the entire week at all. No co-workers, no residents. No elevators going up or down. No one looking in his mailbox, no one going to work. Nobody. The apartment to the right of me seems to be uninhabited. And the apartment on the left... well, there is none. There is no door where it's supposed to be.

Now maybe my floor just isn't really occupied. I haven't been to the lower floors yet. There should be a few shops and small restaurants there. Yet, strangely enough, I smell, see and even hear traces of life. No matter how many times I clean up, there is new litter every day. In my own hall, the "old woman" painting has been replaced with a windmill. And even my own mailbox gets filled by... someone. But more than that, I can hear them. Television shows behind locked doors, gramophone music, whistling kettles, creaking pipes, footsteps. Ironically, all the silence makes you hear the smallest details, opening up a whole new world of noise.


Yes, I could use some distraction before I start seeing things. A walk in the park maybe. If there is one nearby. Didn't have a proper look from my balcony yet, but I can't see the streets. Too dark and too much fog this autumn. To be honest, I have no idea how high up I really am. The stairway shows a number '1' at my floor, but two floors lower it's number '12' again. All I can see from here are the upper floors and rooftops of other apartment blocks. And come to think of it, I haven't seen much life there either. As I write this, I'm looking through the kitchen window. The only lights that shine in the building across are the cheap corridor chandeliers...
Then again, it's Sunday 05:12 right now.
------------------------------------------------------------------------------

Excuse the writing style; English is hard enough already, let alone writing a story in it. But it should give a few details about the game's story / setting. Hope you like it.



See that? This cloud of crap is just a bunch of quad sprites. Nothing fancy. However, see the difference? In the left picture the quads clearly intersect with the environment (floor, walls, ceiling). In the right one they still do, but a little trick hides this ugly artifact.

Now the technical portion for this week. Although this isn't really urgent at the moment, it has been added anyway; Soft Particles. All the cool kids in town already have them, Unreal5 has Soft Particles?! No way! See? Can't do without.

Luckily, implementing this technique is as easy as smearing peanut butter on your head. For a change, you won't need 6 different passes, new types of geometry data, or complicated shaders. The only thing you need is a depth map. If you are using DoF, SSAO or other (post) effects, chances are big you already have such a texture. If not, you should really start making one. A depthMap rendered from the viewer's perspective is like potatoes or rice: a basic ingredient for many dishes.

How does it work? Just render your sprite as usual, but include the depthMap. At the end of the pixel shader, you compare the scene depth with the particle depth. If the difference is small, it means the particle pixel is about to intersect the environment. If so, fade out that pixel:

< vertex shader >
Out.vertPos = mul( modelViewProjMatrix, in.vertPos );

// Projective coordinates to sample the depthMap, same trick as before
Out.projCoords.xyz = ( Out.vertPos.xyz + Out.vertPos.www ) * 0.5f;
Out.projCoords.w   = Out.vertPos.w;

< fragment shader >
// zParticle is the depth of this particle pixel itself, stored in the
// same way as the values inside the depthMap. Flip the subtraction if
// your depth convention requires it.
float zScene     = tex2Dproj( depthMap, in.projCoords ).r;
float difference = saturate( (zParticle - zScene) * factor );
// The smaller the factor, the bigger the fadeout distance.

// The nVidia paper I read about this adds a little more juice with this
// smoothing function:
float softF = 0.5f * pow( saturate( 2 * ((difference > 0.5) ? 1 - difference : difference) ),
                          curveStrengthFactor );
softF = (difference > 0.5) ? 1 - softF : softF;
result.alpha *= softF;

And that's pretty much it. Not a next-gen graphic blaster technique, but then again it comes at a cheap price, and obviously intersecting particles are so 2006...


Movie? I need 3 more sounds, one more shader, and some camera tweaks. But more difficult is getting the damn thing recorded. FRAPS is a little bit slow, and the recorded sound is even crunchier than your grandma's unshaved legs. You'll be updated soon.

America's funniest home videos