Monday, November 16, 2015

Basic principles of game-graphics, 2015

How does Engine22 bring pixels to your screen? How does a game in general draw its graphics? For me, as an unofficial graphics-programmer, it all pretty much makes sense. But when other people ask about it - including programmers - it seems to be a pretty mysterious area. Also, for those who haven't touched "graphics" for the last, say, 10 years, a lot might have changed.

Quite some years ago, an old friend without any deeper computer background thought I really programmed every pixel you could possibly see. Not in the sense of so-called shaders, but really plotting the colours of a monster-model on the screen, pixel-by-pixel, using code-lines only. Well, thank the Lord it doesn’t work like that exactly. But then, HOW does it work?

Have a Sprite

Graphics is a very complex subject, with multiple approaches and several layers. There is no single perfect way to draw something, though most games use the same basic principles and helper-libraries more or less. On a global level, we could divide computer-graphics into 2D and 3D to begin with. Although technically 3D techniques overlap 2D (you can draw Super Mario using a 3D engine - and many modern 2D games are actually semi-3D), the old 2D games you saw on a nineties Nintendo used sprite-based engines.

A sprite is basically a 2D image. Like the ones you can draw in Paint (or used to draw, when Paint was still a good tool for pixel-artists; modern Paint is useless). In addition, sprites often have a transparent area. For example, all pink pixels would become invisible so you could see a background layer through the image. Also, sprites could be animated, by playing multiple images quickly enough after each other. Obviously the more "frames", the smoother the animation. But, and this is typical for the old sprite-era, computer memory was like Brontosaurus brains. Very little. Thus low-resolution sprites, just a few colours (typically 16 or 256), and just a few frames and/or little animation in general.
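To make that transparency trick concrete, here is a minimal sketch of "colour-key" blitting in Python. The images are just 2D lists of colour tuples, and pink as the magic key value is only the classic convention (any reserved colour works) - this is an illustration, not how any particular console did it:

```python
# A minimal sketch of colour-key transparency: sprite and background are
# 2D lists of colour tuples, and the key colour marks see-through pixels.
PINK = (255, 0, 255)  # the classic "magic" transparency colour

def blit_sprite(background, sprite, x, y, key=PINK):
    """Copy sprite pixels onto the background, skipping the key colour."""
    for sy, row in enumerate(sprite):
        for sx, pixel in enumerate(row):
            if pixel != key:                       # transparent pixels are skipped
                background[y + sy][x + sx] = pixel
    return background
```

A real sprite engine would do this in hardware, of course, but the idea is the same: the key colour is simply never copied, so the background shines through.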
Goro, the 4-armed sprite dude from Mortal Kombat.

When we think about sprites, we usually think about Pac-Man, Street Fighter puppets or Donkey Kong throwing barrels. But the environment was made of sprites as well. The reason why Super Mario is so… blocky, is because the world was simply a (2D) raster. Via a map-editor program, you could assign a value to each raster-cell. A cell was either unoccupied (passable), a brick-block, a question-mark block, or maybe water. And again, the background was made of a raster - but usually one with larger cells. Later Marios would allow sloped (thus partially transparent) cells by the way.

So typically an old-fashioned 2D "platform-game" engine gave us a few layers (sky, background, foreground you walk/jump on) for the environment, and (animated) sprites for our characters, bullet projectiles, explosions, or whatever it was. The engine would figure out which cells are currently visible on the screen, and then draw them cell-by-cell, sprite-by-sprite. In the right order; background sprites first, foreground sprites last. And of course, hardware of the Sega, Nintendo or PC provided special ways to do this as fast as possible, without flickering. Terribly slow and primitive by today's standards, but pretty awesome back then.
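That "figure out which cells are visible" step is just integer math. A toy sketch, assuming a 16-pixel tile size (consoles typically used 8 or 16) and a camera offset in pixels - the names are illustrative:

```python
# A toy sketch of tile-based culling: given a camera offset in pixels,
# return the range of map cells that actually appear on the screen.
TILE = 16  # pixels per cell (an assumption; 8 or 16 were typical)

def visible_cells(cam_x, cam_y, screen_w, screen_h):
    """Return (first_col, last_col, first_row, last_row) of on-screen cells."""
    first_col = cam_x // TILE
    first_row = cam_y // TILE
    last_col = (cam_x + screen_w - 1) // TILE   # last pixel column's cell
    last_row = (cam_y + screen_h - 1) // TILE
    return first_col, last_col, first_row, last_row
```

The engine then loops over exactly that range, back layer first, and never wastes time on the thousands of off-screen cells.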

Next station, 3D

2D worlds made out of flat images have one little problem; you can move and even zoom the camera, but you can't rotate. There is no depth data whatsoever.

3D engines made in the last years of our beloved nineties took a whole different approach (and I'm skipping SNES Mode7-graphics for Mario Kart, or 2.5D engines like the ones used for Wolfenstein or Duke Nukem 3D). Whereas 2D "sprites" were the main resources to build a 2D game, artists now had to learn how to model 3D objects. You know, those wireframe things. To make a box in a 2D game, you would just draw a rectangle, store the bitmap, and load it back into your game engine. But now, we had to plot 8 corner coordinates called "vertices", and connect them by "drawing" triangles. Paint-like programs got extended with (more complicated) 3D modelling programs, like Maya, Max, Lightwave, Milkshape, Blender, TrueSpace, et cetera.

A bit like drawing lines, but now in a 3D space. A (game) 3D model is made out of triangles. Like the name says, a flat surface with 3 corners. Why is that? Because (even to this day) we make hardware specialized in drawing these triangle-primitives. Polygons with 4 or more coordinates would also be possible in theory, but give a lot of complications, mainly mathematical ones. Anyway, Lara Croft is made out of many small connected triangles. Though 15 years ago, Lara wouldn't have that many triangles, resulting in less rounded boobs.
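In data form, that box with 8 corners looks something like this: a vertex list plus an index list that stitches the corners into 12 triangles (2 per face). This is a hedged sketch - the exact layout and winding order differ per engine and file format:

```python
# A unit cube: 8 corner vertices, and 12 triangles (2 per face) expressed
# as triplets of indices into the vertex list.
CUBE_VERTICES = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # back face corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # front face corners
]

CUBE_TRIANGLES = [
    (0, 1, 2), (0, 2, 3),   # back
    (4, 6, 5), (4, 7, 6),   # front
    (0, 4, 5), (0, 5, 1),   # bottom
    (3, 2, 6), (3, 6, 7),   # top
    (0, 3, 7), (0, 7, 4),   # left
    (1, 5, 6), (1, 6, 2),   # right
]
```

Note how the indices let six faces share the same 8 corners instead of storing 36 separate points - the same trick every triangle buffer uses.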

How the hell does an artist make so many tiny triangles, in such a way that it actually looks like a building, soldier or car? Sounds like an impossible job. Yeah, it is difficult. But fortunately those 3D modelling programs I just mentioned have a lot of special tools. There are even programs like Z-Brush that sort of "feel" (minus the actual feeling) like claying or sculpting. You have a massive blob made of millions of triangles (or actually voxels) and you can push, pull, cut, slice, stamp, split, et cetera. Nevertheless, 3D modelling is an art of its own. But, unlike my friend thought, 3D modelling is not a matter of coding thousands of lines that define a model. Thank God - though there is this exception of insane programmers who make "64k programs" that actually do everything code-wise. But I'll spare you the details.

We didn't ditch Paint (or probably Photoshop or Paint Shop by then) though. A 3D wireframe model doesn't have a texture yet. To give our 3D Mario block a yellow colour and a question-mark logo, we still need to put a 2D image on our 3D object. But how? In technical terms: "UV mapping". To put it simply: it's like wrapping (2D) paper around a box, putting a decal-sticker on a car, or tattooing "I miss you Mom" on your curvy arm. UV mapping is the process of letting each vertex know where to grab from a 2D image.
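In other words, each vertex carries a (u, v) pair in the 0..1 range, and the pixels in between get interpolated coordinates. A minimal nearest-neighbour texture lookup could look like this - the list-of-lists texture and the clamping are illustrative assumptions, not any real API:

```python
# UV mapping in its simplest form: a (u, v) pair in the 0..1 range points
# at a spot on a 2D image; sampling converts it back to texel coordinates.
def sample_texture(texture, u, v):
    """Fetch the texel that (u, v) points at (nearest-neighbour, clamped)."""
    h = len(texture)
    w = len(texture[0])
    x = min(int(u * w), w - 1)   # clamp so u == 1.0 stays inside the image
    y = min(int(v * h), h - 1)
    return texture[y][x]
```

Real hardware does exactly this per pixel (plus filtering between neighbouring texels), millions of times per frame.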

3D techniques – Voxels

So far we explained the art-part; feeding a 3D engine with 3D models (a file with a huge array of coordinates) and 2D images we can “wrap” around them. But how about the technical, programming part? How do we draw that box on the screen?

Again, we can split paths here. Voxel engines, Raytracing and Rasterizing are the roads to Rome. The paved roads at least. I’ll be short about the first one. Voxelizing means we make the world out of tiny square… ehm… voxels? They are like square patches. If you render enough of them together, they can form a volumetric shape. Like a cloud. Or this terrain in the 1998 “Delta Force” game series:

The terrain makes me think about corn-flakes, though this "furry" look had a nice side-effect when it comes to grass simulation (something quite impossible with traditional techniques on a larger scale back then).

Although I think it's technically not a Voxel-based engine, Minecraft also kinda reminds me of it; volumetric (3D) shapes getting simplified into squares or cubes. Obviously, the more voxels we use, the more natural shapes we get. Only downside is… we need freaking millions of them to avoid that "furry carpet" look. Though voxels are making their re-entrance for special (background) techniques, they never really became a common standard.

3D techniques – Raytracing / Photon Mapping

Raytracing, or variants like Photon mapping, are semi-photo-realistic approaches. They follow the rules of light-physics, as Fresnel, Young, Einstein, Fraunhofer or God intended them to be. You see shit because light photons bounce off shit and happen to reach your lucky eye. The reason shit looks like shit is because of its material structure. Slimy, brownish, smudgy - well anyway. Light photons launched by the sun or artificial sources like a lightbulb bounce their way into your eye (and don't worry, they don't actually carry shit molecules).

A lot of physical phenomena happen during this exciting journey. Places that are hard to reach because of an obstacle will appear "in shade", as fewer photons reach them. Though photons often still manage to reach the place indirectly after a few bounces (and this is a very important aspect for realistic graphics btw). Every time a photon bounces, it either reflects or refracts (think about water or glass), plus it loses some energy. Stuff appears coloured because certain regions of the colour spectrum are lost. A red wall means it reflects the red portion of the light, but absorbs the other colours. White reflects "everything" (or at least in equal portions), black absorbs all or most of the energy. Dark = little energy bounced.

Well, I didn't pay much attention during physics classes so I'm a bad teacher, but just remember that Raytracing tries to simulate this process as accurately as possible. There is only one little problem though… A real-life situation has an (almost) infinite number of photons bouncing around. Since graphics are a continuous process (we want to redraw the screen 30 or more times per second), it would mean we have to simulate billions of photons EACH cycle. Impossible. Not only are the numbers too big, the actual math - and mainly testing if & where a photon collided with your world - is absolutely dazzling as well. If the world was rendered with a computer, it would take one ultra-giga-mega-Godlike PC! We're not even a little bit close.

BUT! Like magicians, we graphics-programmers are masters of fooling you with cheap hacks and other fakery. Frauds! That's what we are. Raytracing doesn't actually launch billions of photons. We do a reverse process; for each pixel on the screen (a resolution of 800 x 600 would give us 480,000 pixels to do), we try to figure out where its light came from. Hence the name ray*tracing*. Still a big number (and actually still too slow to do real-time with complex worlds), but a lot more manageable than billions. Though it's incomplete… By tracing a ray, we know which object bounced it off to us. But where did it come from before that? We have to travel further, to a potential lightsource… or multiple ones. And don't forget yet another obstacle might be between that object and a lightsource, giving indirect light. You see, it quickly branches into millions and billions of possible paths. And all of that just to render shit. Shit.
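The "one primary ray per pixel" idea fits in a few lines. A deliberately dumb sketch, assuming a single hard-coded sphere, no light sources and no bounces - a real tracer would recurse from the hit point toward lights and other objects:

```python
import math

# One "reverse" primary ray per pixel: shoot a ray from the eye through the
# pixel and test it against a single sphere at (0, 0, 3) with radius 1.
def trace_pixel(px, py, width, height):
    """Return a grey value: 0.0 on a miss, brighter for nearer hits."""
    # Ray direction from the eye (origin) through this pixel on a unit plane.
    dx = (px + 0.5) / width - 0.5
    dy = (py + 0.5) / height - 0.5
    dz = 1.0
    # Ray-sphere intersection: solve |t*d - c|^2 = r^2 for t (quadratic).
    cx, cy, cz, r = 0.0, 0.0, 3.0, 1.0
    a = dx*dx + dy*dy + dz*dz
    b = -2.0 * (dx*cx + dy*cy + dz*cz)
    c = cx*cx + cy*cy + cz*cz - r*r
    disc = b*b - 4.0*a*c
    if disc < 0:
        return 0.0                       # the ray missed: background stays black
    t = (-b - math.sqrt(disc)) / (2.0*a)
    return max(0.0, 1.0 - t / 4.0)       # crude shading: nearer hit = brighter
```

Run this for all 480,000 pixels of an 800 x 600 screen and you have a (very boring) raytraced image; every extra bounce multiplies the work, which is exactly the branching problem described above.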

Well, there you have the reason why games don't use Raytracing or Photon mapping. And I was about to write "(yet)", but it's not even a "yet". We're underpowered. It might get there one day, but currently we have much smarter fake tricks that can do almost the same (I must say some engines may actually use raytracing for very specific cases to support special techniques - hybrids).

But it might be useful to mention how (older?) 3D movies were rendered. If you remember game-cinematics like those pretty-cool-ugly movies I mentioned in my previous "Red Alert" review, you may have noticed the "gritty-spray" look. Now first of all, movies are different from games, as they are NOT real-time. Games have to refresh graphics 30 or more times per second to stay fluent. Movies also have a high framerate, but we can render these frames "offline". It doesn't matter if it takes 1 second, 1 hour, or 1 week to draw a single frame. If you have two production years, you have plenty of rendering-time. And of course, studios like Pixar have what they call "Render-Farms". Many computers, each doing a single frame or even just a small portion of a single frame. All those separate image-results are put together in the end, just like in the old days when handmade drawings of Bambi were put in line.

Toy Story must have been one of the first (if not first) successful, fully computer-animated movies.

So that allows us to sit back, relax, and actually launch a billion photons. Well… sort of. Of course Westwood didn't have years and thousands of computers for their Red Alert movies, nor were the computers any good back then. So, reduce "billions" to "millions" or something. It's never enough really, but the more photons we can launch, the better the results. Due to limitations or time constraints, especially older (game) movies appear "under-sampled", giving that gritty-pixel-noisy-spray look. What you see there is just not enough photons being fired. Surface pixels missed important rays, and blur-like filters are used afterwards to remove some of the noise.

3D techniques – Radiosity & LightMaps & Baking

A less accurate, but actually much faster and (nowadays) maybe even nicer technique, when taking the time/quality ratio into account, is baking radiosity lightmaps. Sounds like something North Korea would do in a reactor, but what we actually refer to is putting our camera on a small piece of surface (say a patch of brick-wall) and rendering the surrounding world from its perspective. Everything it can "see" is also the light it receives. If we do that for "all" patches in our world, and repeat that whole process multiple times, accumulating previous results, we achieve indirect light.

But again, it's expensive. Not as expensive as photon mapping or raytracing maybe, but too expensive for real-time games nevertheless. To avoid long initial processing times, we just store our results in good old 2D images, and "wrap" them onto our 3D geometry later on. Which is why we call these techniques "pre-baked". An offline tool, typically a Map Editor, has a bake-button that does this for you. This is also what Engine22 offers by the way.
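The storage idea - light computed once, written into a 2D image - can be sketched like this. Note this toy version only bakes direct light from point lights with inverse-square falloff; real radiosity renders the scene from each patch and iterates, as described above. All names and the flat 1x1 wall mapping are assumptions:

```python
# A heavily simplified "bake": write direct light from point lights into a
# small 2D lightmap, once, offline. The patch grid lies on a 1x1 wall at z=0.
def bake_lightmap(size, lights):
    """lights: list of ((x, y, z), power). Returns a size*size grid of floats."""
    lightmap = [[0.0] * size for _ in range(size)]
    for ty in range(size):
        for tx in range(size):
            # World position of this texel's patch on the wall.
            px, py = (tx + 0.5) / size, (ty + 0.5) / size
            for (lx, ly, lz), power in lights:
                d2 = (px - lx) ** 2 + (py - ly) ** 2 + lz ** 2
                lightmap[ty][tx] += power / (d2 + 1e-6)  # inverse-square falloff
    return lightmap
```

At run-time the game just samples this image like any other texture - which is why baked light is essentially free, and also why it can never change.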

Only problem is that these pre-baked maps can't be changed afterwards (during the game). So it only works for static environments. Walls / floors / furniture that can't move or break. And with static lightsources that don't move or switch on/off (though we have tricks for that).

3D techniques - Rasterizing

Now this is where I initially wanted to be with this blog post. But as usual, it took me 4 pages to finally get there. Sorry. What most 3D games did and still do, is "Rasterizing". And we have some graphical APIs for that; libraries that do the hard work, and utilize special graphics hardware (nVidia, AMD, …). Even if you never programmed, you probably heard of DirectX or OpenGL. Well, these are such APIs. Though DirectX does some other game-things as well, the spear-point of both APIs is providing graphics-functions we can use to:
·         Load 3D resources (turn model files into triangle buffers)
·         Load texture resources (2D images for example)
·         Load shaders (tiny C-like programs run by the videocard, mainly to calculate vertex positions and pixel colours)
·         Manage those resources
·         Tell the videocard what to render (which buffers, with which shaders & textures & other shader parameters)
·         Enable / disable / set drawing parameters
·         Draw onto the screen or into a background buffer
·         Rasterize

Though big boys, these graphical APIs are actually pretty basic. They do not make shadows or beautiful water-reflections for you. They do not calculate if a 3D object collides with a wall. You still have to do a lot yourself. But at least we have guidance now, and utilize 3D acceleration through hardware (MUCH faster).

If we want to draw our 3D cube, we'll have to load its vertex data into buffers on the videocard, activate those buffers together with a shader and a texture, and issue a draw-command.

Or something like that. Drawing usually means we first load & transfer raw data (arrays of colours or coordinates) to the videocard. After that, we can activate these buffers and issue a render-command. Finally, the videocard does the "rasterizing".

In the case of 3D graphics, this means it converts those triangles to pixels. A vertex shader calculates where exactly to put those pixels/dots on the screen. Which usually depends on a "Camera" we'll define elsewhere, as a set of matrices. These matrices tell the camera position, the viewing-direction, how far it can look, the viewing angle, et cetera. The cube itself also has a matrix that tells its position, rotation and possibly scale. How & if the cube appears is a calculation using those matrices. If the camera is looking the other way, the cube won't be on the screen at all. If the distance is very far, the cube appears small. And so on. Doing these calculations sounds very complex, and yeah, matrix-calculations are pretty scary. But luckily the internet has dozens of examples, and the videocard & render API will guide you. And if you use an engine like Engine22, it will do these parts for you most of the time.
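The heart of that vertex-stage math is surprisingly small. A stripped-down sketch, assuming the camera sits at the origin looking down the z-axis with a focal length of 1 - the full version multiplies the vertex by a 4x4 model-view-projection matrix instead, but it boils down to the same perspective divide:

```python
# What the vertex stage boils down to: project a 3D point onto the screen.
def project(vertex, width, height):
    """Map a 3D point in front of the camera to 2D pixel coordinates."""
    x, y, z = vertex
    # Perspective divide: farther points (bigger z) shrink toward the centre.
    ndc_x = x / z
    ndc_y = y / z
    # From the -1..1 "normalized device coordinate" range to actual pixels.
    px = (ndc_x * 0.5 + 0.5) * width
    py = (1.0 - (ndc_y * 0.5 + 0.5)) * height   # screen y points downwards
    return px, py
```

Feed it a point straight ahead and it lands dead-centre; double the distance and the offset from the centre halves - which is exactly why far-away cubes look small.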

During the rasterization process (think about an old matrix printer plotting dots on paper), we also have to "inject" colours. Fragment or Pixel shaders are used for that nowadays. It's a small program that does the math. It could be as simple as colouring all pixels red, but more common is to use textures (the "wraps", remember?), and possibly lightsources or pre-baked buffers as explained in the previous part. This is also the stage where we perform tricks like "bumpmapping".
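A software imitation of such a pixel program might look like this; it assumes the rasterizer already handed us interpolated (u, v) coordinates and a light factor, which the real pipeline computes for us. In a real engine this would be a few lines of GLSL or HLSL instead:

```python
# A toy "pixel shader": sample the texture at the interpolated (u, v)
# coordinate and modulate the colour by a light factor (0..1).
def fragment_shader(texture, u, v, light):
    """Return the final colour for one pixel."""
    h, w = len(texture), len(texture[0])
    tx = min(int(u * w), w - 1)   # clamp so u == 1.0 stays inside the image
    ty = min(int(v * h), h - 1)
    r, g, b = texture[ty][tx]
    return (r * light, g * light, b * light)
```

Swap the body and the whole surface changes its look - that one little program is where most of the "material" magic lives.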

Note that these "shaders" weren't there 15 years ago. The principles were the same more or less, but these parts of the drawing "pipeline" were fixed functions. Instead of having to program your own shader-code, you just told OpenGL or DirectX to use a texture or not, or to use lightSourceX yes/no. Yep, that was a lot simpler. But also a lot more restricted (and uglier). Anyhow, if you're an older programmer from the 2000 era, just keep in mind that shaders took over the place. It's the major difference between early 2000 and current graphics techniques. Other than that… same old story, more or less.

Shots from the new Engine22 Map Editor. Everything you'll see is rasterized & using shaders.

So yeah, with (fragment) shaders my old friend maybe was a little bit right after all, drawing the scene pixel-by-pixel. Either way, it's quite different from more natural (realistic) approaches like photon mapping. We rasterize an object, say a cube, monster or wall. We plot the geometric shape on the screen - possibly culling it if something was in front! - but don't have knowledge about its surroundings. We can't let our pixel-shader check the surroundings to determine what to reflect, what casts shadows, or which lightsource directly or indirectly pisses its photons on it. This is done with additional background steps that store environmental information into (texture)buffers we can query later on in those shaders. For example, such a buffer could tell us what a lightsource affects, or how the world is captured at a single point so we can use it for reflections.

It's complex stuff, and moreover, it's fake stuff. Whether it's shadows, reflective orbs or the way light finds its way under that table; it's all fake, simplified, approximated, guessed or simulated. But so damn smart and good that a gamer can hardly tell :) Though game-engines like Unreal or Engine22 do a lot more than just graphics (think about audio, physics, scripting, AI, …), their selling spear-point and major strength is usually their magic box of tricks there. And as videocards keep getting faster and faster, Pandora's box is getting more powerful as well. But remember kids! It's not physically correct. Fresnel would punch me three black eyes.

Sunday, October 25, 2015

Post-mortem-review #6: Red Alert

My little brother wasn't as much a gamer as I was, but more than once, he would point me to the classics. When Command & Conquer came out, I wasn't really familiar with the "RTS" genre (Real Time Strategy). I played the first Warcraft, which was fun, but Command & Conquer's predecessor "Dune" didn't really catch my appetite. A top-view world with sand or pavement tiles and blocky things that would represent buildings or tanks... and worms coming out of the sand now and then. Nah, I was more of a Doom guy back then.

The first RTS games were developed in the eighties. As a PC gamer, Dune (1992) & Warcraft (1994) were my first encounters.

High Tower

The first Command & Conquer was made in the early CD-ROM era. I can imagine some of you kids have never seen a CD-ROM. Well, neither did we back then. It was hot-brand-new, and beyond cool, not to forget. A PC nowadays is, ehm... hell, it has been a while since I bought a PC. Most people have a laptop or tablet. Aside from Auto-CAD engineers and the police-station that still operates on old mainframes, those big Desktops are a thing of the past. In the past it was pretty badass to have a "High Tower"; nowadays people will laugh at you. Anyhow, buy a laptop and it has a DVD or Blu-Ray drive, sound card, 3D card, wide-screen with ten-zillion colours, WiFi, network card, et cetera. Of course. What kind of shop would sell you a laptop without a sound-card, or without networking capabilities? Can you imagine there was a time that none of the parts mentioned above were standard, or even existed at all?

That's right. In early 1995 - when C&C was released btw - we didn't have a sound-card at home. Without much internet, a network card wasn't exactly common either. Our monitor only had 256 colours or so. WiFi would have been a pet-bird's name back then. And CD-ROM? Hahaha. No. Only for rich people. My dad was a computer fanatic (in terms of tinkering/destroying hardware) but not THAT fanatic; if I remember well, the first CD-ROM drives were sold for no less than $400. In 1995, you could buy a new house for that, or slave away for months.

That didn't stop us from staring at Sunday-evening TV programs where "experts" were assembling computers and teaching us how to use them. You got to understand, 20 years ago there certainly wasn't a computer in each household. It was an expensive, hard-to-grasp thing. Most people didn't work with computers yet, internet was still a baby, and for games we had a Sega, Nintendo... or almost a Playstation, which used CD-ROMs btw. But the PC was gaining popularity. The office had them, some classrooms had an (extremely old 286 or 386) computer, and besides typing spreadsheets you could now also listen to music or watch a digital movie with this fantastic toy called "CD-ROM"! Quality was horrible but… just the word itself... CD-ROM! No idea what it exactly meant, but a CD just felt so much better than those broken 3.5" floppy disks.

The Bigger, The Better.

And yes, it actually was a whole step forward. A floppy could store up to 1.44 MegaByte, and was terribly slow. Because of those limitations, PC games obviously had to respect some boundaries, by using low quality sounds/images, and having not too many disks in a box. I believe I once received Doom on 4 or 5 disks. Insert disk 4 of 5. Type A:\. Wait and hear the drive making digital lawn mower sounds, ggggzzzzzkrrt kkrtt krt. And then at disk 4/5, chances were high it would say "Unable to read disk". FFF*****!!

With floppies, you just knew at least one of them would be broken. CD-ROMs were more robust. And although the first drives were still slow as shit, compared to floppies they were fast. But far more important, CD-ROMs were about 500(!) times bigger in size. MegaBytes I mean, not actual size. A common CD had about 740 MegaBytes of space. Today, that sounds like floppies again, as the average USB drive has 8GB or more. Hence we don't even use physical drives anymore. It's all somewhere in that digital cloud baby.


But back then, it was awesome of course. So far the PC had never been a very popular gaming platform, but now it revealed a secret super-weapon: Game-Movies. Thanks to all those extra megabytes, all of a sudden, every developer equipped their games with crazy music and even crazier movies. Not the slick photo-realistic Hollywood (in-game!) 3D renders we have now. But real (very-low-budget) actors on semi-3D (read: ugly) rendered backgrounds. Silly by today's standards, but a big deal then. It really separated PC games from consoles like the Nintendo, which still used much smaller cartridges. Sure, a PC was an expensive hobby, but if you were lucky enough to have one... Holy Moly!

Silly as the acting, decors, costumes and pretty much everything might have been, visuals like this were absolutely impossible on game consoles till that point.

We were lucky; when the prices started to drop a bit, dad felt life would be better with a CD-ROM drive. And our very first CD-ROM game was? Command & Conquer? No. The 7th Guest. Not just a few cut-scenes, the whole game was rendered like a movie! Truly amazing, being used to simple 2D side-scrollers mostly! Too bad the puzzles were a bit too difficult for an 11-year-old though. Command & Conquer was released not much later. But… not being charmed by "Dune 2", I didn't pick up the game. The pictures of the C&C cut-scene movies in games-magazines were intriguing though.

The very first screenshots from C&C I saw in a games magazine.

But as said, once again I missed the ride, and it was my little brother that came home with big stories about the dad of his friend playing "Command & Conquer", driving a so-called Mammoth tank over 100 men, bombarding bases, deploying machine-gun towers, and so on. Whatever, little dude. It sounded interesting, but it couldn't be cooler than "Crusader: No Remorse". When little, you can't try & buy every game that sounds interesting. There were only a few times a year you could ask mom & dad for a game, so you'd better pick wisely. For Christmas 1995-1996, Quake1 or Full Throttle were on my list. Fortunately little dude persisted and asked for Command & Conquer... Unfortunately for little dude, older & fatter dude would take his place behind the computer and play C&C from then on.

Though I sort of ignored the game, it didn't have to try hard to steal my heart once its CD was loaded in our drive. Mind you, this game had two CD-ROMs! Not one, but two! That really felt like getting two games for the price of one. Two times better, two times bigger, two times more fun. One CD contained the "GDI" (goodguys) campaign, and the other CD the "NOD" (badguys) campaign. The in-game graphics weren't that special, but the slamming electro-metal-hiphop-whatever-it's-called music made by Frank Klepacki gave goose bumps right from the start. Humvees, machine guns, guitars and cut-scenes with explosions. What more could a guy ask for?

Real-Time-Strategy genre

But other than that, it was also just a very nice game to play. No fast-paced dumb shooting like Doom. This time you had to think about your moves. For those who never played an RTS genre game, let's recap. We have a (top-down "God" view) terrain with obstacles such as rocks, trees, villages, bridges or water. In most missions you'll have to establish a base first, and make a defence system (walls/bunkers/towers) to keep the enemy from destroying your base. Money comes from harvesting Tiberium, which is a sort of large radioactive coleslaw. And since your harvester is usually somewhere outside the base, you'd better keep an eye on it, as the enemy may try to destroy it. In the meanwhile you'll be making soldiers, tanks, APCs, buggies, artillery and get-to-the-choppers. First to recon the terrain, and later on to do a counterstrike on the enemy base. You could knock on the front door with a whole tank battalion, or maybe you prefer a more subtle approach, sneaking in, and weakening the base defence first by capturing power-plants for example.

The key is to do all those things simultaneously, and to do them right with the limited time/resources. Spend all money on making tanks, and they all get destroyed by air-units because you forgot Anti-Air units. Waste too much time building a base, and the enemy keeps pounding your units and wallet. Not spending on an extra harvester will keep the cash flow slow. And it's usually smart to attack the enemy base from the right direction(s), with the right timing, with the right units.

That's an RTS (Realtime Strategy Game) in a nutshell. But probably I didn't need to tell you. Since C&C, hundreds of RTS games have been made. Total Annihilation, WarCraft, StarCraft, Age of Empires, Total War, Company of Heroes, Earth 2150, and the list goes on and on. Command & Conquer wasn't the first RTS game, but it probably was the first really successful one, the one that made the genre popular till today. However… only one can be the king of RTS. And to me, the king is C&C: Red Alert. And do I have a good reason for it, besides being a living-in-the-past guy? Well, I think I do.

Yin and Yang

Not so much the last years anymore, but I have played quite some RTS games. And you know what? I didn't like most of them. Mainly because the word "strategy" can be replaced with "make a billion tanks ASAP". There is no thinking in most games. It's more like a giga-multi-tasking fest. You, you & you - go dig gold here, make 100 strong tanks, repair building, commit airstrike, deflect enemy attack at the north side, send 100 tanks to enemy base, et cetera. If women are truly better at multi-tasking, they should love RTS games.

C&C was also about doing lots of things quickly, but on a somewhat slower, more manageable level. Better balanced, more thinking, less massive. Especially in Red Alert, if you did well, you could overrun an enemy base with minimum casualties on your side. The trick is to find a weak spot, uncovered by Tesla Coils, anti-air cannons or turrets. Sneak in with spies or engineers, capture the right buildings, and destroy the base from the inside out. It also paid off to deploy the right units. Soldiers are good against other soldiers, tanks aren't. Yet tanks can punch over walls or defensive structures such as turrets or towers. Artillery can inflict heavy damage but needs to be protected. Boats can beat up a base from a safe distance, yet you'll need to construct a shipyard & sweep the canal clear of submarines first. And powerful air support can only be reached by taking out anti-air units first, as planes are too expensive to mess around with.

(Almost) every unit in Red Alert was useful, from the cheapest soldier to the most expensive battleship or tank. Each defensive structure and building has its purpose. This is because the game-rules and balancing were done very well. There were "hacking" programs that allowed you to alter the C&C unit properties. You could make tanks more powerful, planes faster, or give soldiers long-range super-lasers. In the beginning, it always annoyed me that it took ages to take out 1 soldier with a tank, and that the soldier fire-range was the same as a Super Soaker's. So, we pulled the sliders and mangled some values… and… the gameplay was gone. Fun to see mega airstrikes of course, but it all just felt wrong. Too easy. Winning now was only about making enough units of type X.

Much more than downloading a virtual box today, old game boxes had charm. When seeing such a kick-ass box with 2(!) CD-ROMs, you just knew magic was inside.

Red Alert had its game-rules tuned perfectly. The prices, the speed, the fire-ranges, the damage, the everything. Duh, of course. Of course? Looking at many of those other games I mentioned, that is certainly not always the case. Half of them are too chaotic / random, and the other half is too complicated. In a game like Total Annihilation, it really doesn't matter if you attack the enemy base via West or East, with tin-can robots, spider-tanks or missile planes. BIG numbers, that's the only thing that counts (or a few Big Berthas). I'd hate to send 50 planes kamikaze-style into the enemy's base, but there is just no other way to beat the enemy here. So after a while, you don't care anymore about your units. And since it's so much and so fast, you don't get a chance to execute well-coordinated attacks with small groups of specialized units. As soon as you can make stronger boats/planes/vehicles/Big Berthas, you'll forget about the smaller ones.

The other half, and then I'm especially talking about the WW2-style RTS games, were too complicated for my taste. Not a whole lot of units, but many controls, too many tiny things to remember, and too many ways to screw up. "Luckily" you'll get a whole lot of dialogs and hints, telling you exactly what to do. But without those hints and voices, I really have no idea whether it's better to send 3 bazooka guys to that tank, or to lay down. Either it all feels very scripted, or there is too much coincidence, timing-importance and variation to predict your chances. It's very hard to measure the actual gain when doing this or that. In Red Alert on the other hand, after a while, it gets very clear that it takes 5 units of type X to perform task Y. That sounds a bit artificial, but it's actually nice to get good at a game because you actually understand how it works. I prefer clear rules over "randomness".
After some excersise, you would exactly know what a group of 10 paratroops could and couldn't do.

Back to the Soviet future

But having the rules set up correctly isn’t the only reason why Red Alert is still enjoyable today. It’s just… It’s just… it has the P from POW! Don’t know how to explain. A few hours ago I finished Red Alert 2 (which can be downloaded for free –legally!– via EA btw). And though it was nice, it doesn’t come close to its older brother in my opinion. Not because of bad rules – RA2 had them pretty right as well. Where is the catch?

First of all, I liked the RA graphics and style more. It’s not beautiful, but it’s *clear*. With my bad eyes, I can clearly distinguish a Yak fighter plane from a V2 rocket truck. Red Alert 2 however has more futuristic, metallic, shiny, weirdly shaped mobiles. And to make it worse, as RTS battles got more massive, they also zoomed out, making the puppets tiny. In later 3D RTS games you can zoom in though, that’s true. But let me tell one thing about 3D RTS games: I never liked their graphics either. Certainly not the early ones. Early 3D graphics were very limited, so a tank usually was just a milk-box with an ugly low-res texture on it. Can’t blame them with 200 tanks on the screen, but nonetheless the hand-drawn sprites just looked better. Obviously we can do a lot more these days, but still RTS games lag behind the high-quality physics and visuals you got used to from First Person Shooters. Soldiers run as if they shat their pants, and die without ragdoll physics. Not impressed.

What I also disliked a bit about RA2, C&C Tiberian Sun, and pretty much all other C&C titles that came after, was the futuristic setting. The first C&C and Red Alert had science-fiction elements too. Orca helicopters, laser towers, chronospheres. But subtle. The majority of units were based on somewhat real military hardware, and in the case of Red Alert, with a big wink to Soviet toys. The more Sci-Fi elements were used as advanced, powerful weapons. A tasty combination. But in C&C / RA2, Sci-Fi really took over and I just missed good old bombers, artillery cannons and guys with normal machine guns.

Less Sci-Fi than the first Command & Conquer, but still a weird mix between outdated Soviet hardware and crazy futuristic techniques such as the beloved "Tesla Coil".


All right. That’s a matter of taste maybe. But I want to emphasize that clear sprites sometimes just work better than a chaotic mess of colourful laser-shooting “things”. Speaking about taste, one other reason to love Red Alert was the sound. I mentioned “POW” and “Punch” before. And I think that’s the combination of big-ass sprites, heavy guitar music, rattling machine guns, and death-screams. Of course, every (RTS) game has that. But some just do it better than others. And RA did it really well. Modern games are ashamed to have a good background music track and leave it at some threatening ambience and classical march music. It’s all so goddamn serious these days. Where are the drums and electric guitars? Where is the Hell March?!

And as for the sound effects, guns often sound like popcorn. In Red Alert guns sound like big guns with bass. As if they’re shooting 20 inch solid lead pipes all the time. Here, take that you dumb bunker. Barrels explode in huge flames, numb “bum - bum” from a Cruiser ship would mean serious trouble. And when a man dies in RA, he screams like a man. Not like a stumbling figure skater.

Do the math. My eyes were pleased. My ears were pleased. My tactical brain was pleased. The right concoction. Battles not too huge, sounds of battle not too quiet. Combat not too slow, choices of life and death not too hasty. Packed in more than thirty missions, as well as skirmish missions if you can’t get enough. AND… of course, interspersed with movies where a real-life guy played Stalin.

Sunday, September 20, 2015

You can talk “the Talk”, but can you loop “the Loop”?

Humans, parrots, donkeys, flies and pigs have that thing called a "heartbeat". X times per second it pumps around our blood. Games do the same, more or less, with the only difference that an insanely high heartbeat will make you drop dead, while games thrive on it. You should try to get the cycle done at least 30 times per second.

This article is meant for beginners, explaining the GameLoop and more specifically the role of timers in a game. At the end, a small example of the Engine22 eGameLoop component is given.

Get ready for the Launch!

When programming something, there are generally three ways to execute something:
·         Single shot
·         Event driven
·         Looped

Batch files or scripts are usually good examples of a single-shot program. You start the program, a list of instructions is executed one-by-one (possibly pausing to ask the operator whether to do X, yes or no), and then it shuts down. Copying or converting files, downloading data, starting an advanced calculation in the background, unpacking files, printing a document, et cetera.

Event-driven programs are your usual Desktop Applications. They start, initialize stuff, and then wait for user-input. Press a key, click a button, swipe the screen, drag & drop items, and so on. Form-based applications made in Delphi or .NET are good examples. Components like a button or listview generate all kinds of events when being pressed, entered, changed, moved-over, and so on. On a deeper layer, the OS (Windows, Linux, …) is registering input and sends your application “messages”, which are then translated to events you can choose to use.

By the way, in industrial types of programs, events can also be “interrupts”. Increment a counter if sensorX generates a pulse, engage a safety procedure when the door-sensor loses signal, et cetera. Just saying that input doesn’t have to be a mouse-click or keyboard button. A mouse-click or button-press, by the way, generates an interrupt in the OS as well, causing a message to be sent to your application.

C’mon baby Do the Loop

And then we have the “Looped” kind of program. Grandpa telling the same story over and over and over again, and again. Until you knock his head with a wheelchair maybe. You write code, and then tell it to repeat this code until X happens. Where X could be the end of your program, a STOP signal, or whenever this (sub)task is considered completed/aborted. Note it executes the same code again and again, but that doesn’t mean the exact same thing has to happen. A typical scenario is that we check input parameters each “cycle”, which may alter the routing (if button1Pressed then … else …) or the behaviour of other code parts.

Industrial applications, often running on PLCs or Micro-Controllers, usually do “the loop” approach. Each cycle, they check inputs (sensors, touchscreen), run regulators, and send outputs (relays, pumps, valves, servos). It’s the heartbeat that keeps the program going. And if there is no heartbeat (stuck in a loop, error raised, hardware issue), the “Watchdog” will time-out and reset the chip.

And games aren’t much different. Note that applications can combine both events and (multiple!) loops, but again the “Game Loop” is what makes your game drawing, moving, sounding and doing stuff continuously.

You probably know how to make an ordinary loop:

            while not ( flagSTOP ) do
            begin
                runTask( .... );
            end;
There is one little-big problem with this code though; no matter how simple “runTask( … )” is, your CPU will go 100% and everything else freezes. It’s trying to execute “runTask” as much as possible, no pausing, no breathing-space for any other application.

This piece of code has two problems. We have no speed-control (the heartbeat pacemaker going mad). And since it’s executed in the main-thread, this means it will block other main-thread tasks in the same program, such as refreshing the screen or handling button-clicks. As for the speed-control, one seemingly simple solution would be using a Timer component. Delphi, .NET, Qt, Java, they all have Timer components that can run code each X milliseconds. The “idle” time between two cycles is then available for other stuff.

Real-time TV

Problem solved then? Hmmm, not quite. At least, it depends on the accuracy we want. See, your default Timer isn’t very accurate. Why? Because the Windows OS (don’t know about Linux) itself doesn’t have very accurate timing. Why? Because the Windows OS isn’t designed as a “Real-time” operating system. Why? Because Windows doesn’t need to be. Why? Because Windows typically isn’t used on time-critical hardware, such as a vehicle ECU or a packaging machine controller. Why? Ah, drop dead.

Though we all love good performance, 99.9% of Windows applications aren’t time-critical. People won’t die if we miss a frame, and we don’t control a packaging machine that screws up if its pneumatic valve is powered 10 milliseconds late. Machinery often requires a real-time system to guarantee the same predictable results over and over again (24/7!). Which is why you shouldn’t use a Windows computer for that. Why? I told you why, because the Windows OS isn’t designed for real-time applications. It does not guarantee that your program will execute the loop again within exactly 872 microseconds. On a typical Windows computer, we have dozens of other programs and background tools running at the same time, which can all claim the CPU or memory, so we simply can’t give any guarantees.

Now a game isn’t time-critical in the sense that you will get hurt if the framerate drops from 60 to 30 all of a sudden. Although… I’ve seen kids on Youtube that went ape after getting shot in a Death-match game due to a computer hiccup… Nevertheless, we want a smooth experience. If your television refresh rate fluctuates between 15 and 100 FPS all the time, you’ll be puking after five minutes. Like in machinery, we want to run our game at a certain pace, and guard that interval.

Say we use a Delphi TTimer and set the interval to 16.66 milliseconds. That means the timer-event is triggered 1000 / 16.66 = 60 times per second (thus FPS = 60). In theory. We’ll use this event to execute our game-loop. In this loop we could, for example:
·         check user input (keyboard, mouse, gamepad)
·         update game-logic, physics and A.I.
·         update audio
·         let the video-card draw the whole scene

Now since we have only 16.66 milliseconds between 2 cycles, that is quite a lot to do in a short time!! Yep, true, but you’ll be amazed what a computer can do. Don’t forget about half of the work is done by another processor-set, the video-card, which is on steroids. The picture above implies everything happens in sequence, but it's common to have some multi-threading going on to execute these sub-tasks in parallel. While the video-card is drawing, you could update audio for example. You’ll also learn that a lot of programs out there are slow as shit because of bad programming. If your photo-realistic game can poop out 40 frames per second, there is no reason why some silly editing program takes 5 seconds to do a single simple task.
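To make the loop idea concrete, here is a minimal sketch in Python (not Engine22's Delphi; all names here are made up for illustration) of such a fixed-budget game loop:

```python
import time

def run_game_loop(update, render, target_fps=60, max_cycles=3):
    """Minimal game-loop sketch: update + render each cycle, then rest.
    A real game loops until quit; max_cycles just keeps the demo finite."""
    frame_time = 1.0 / target_fps           # e.g. 16.66 ms budget at 60 FPS
    previous = time.perf_counter()
    for _ in range(max_cycles):
        now = time.perf_counter()
        delta = now - previous              # elapsed seconds since last cycle
        previous = now
        update(delta)                       # input, game-logic, physics, audio
        render()                            # hand the scene to the video-card
        spare = frame_time - (time.perf_counter() - now)
        if spare > 0:
            time.sleep(spare)               # idle time for other applications
```

Note how the spare time is given back with sleep(); a plain sleep() is not very precise on a non-real-time OS, so this only approximates the target framerate.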

A good engine is a powerful beast, doing thousands, no MILLIONS!, of calculations every second. The key to getting this fast is not really computing formulas in a very optimized way, but avoiding stuff that doesn’t have to be calculated (every cycle). Don’t do extensive searches if you could also do that just once during the loading-phase. Use smart algorithms to reduce search lengths. Don’t calculate advanced physics for an object miles away. Don’t animate a 30-legged centipede that is somewhere behind you. Anyway, optimizing is another story.

60 Frames per Second is a nice goal to strive for, though I’m already happy if the game just keeps above the 30 FPS line. Above thirty, the human eye perceives animations as smooth, but lower framerates will appear choppy. How many frames you can reach depends on what you’re doing (how complex & how many entities), and how fast the computer is, obviously.

If we’re trying to do more than the computer can handle, it means the framerate will drop and our timer events will fire back-to-back without any delays between the cycles. This can be temporary, for example when entering a heavy scene, or when the computer is doing a virus-scan in the background (slowdowns aren’t always our fault!). Doctors at rush-hour in the ER, more bloody patients coming in than we can handle. If it’s structural, meaning we never reach our desired framerate and the CPU hits 100% all the time, we should consider lowering the target framerate (allow fewer patients), doing less code / optimizing (train our doctors), or switching to a more powerful platform (a bigger hospital).

Take a break

On the other hand, it may happen that we can easily perform our tasks in the given timeframe. If the timer runs at 58.8 FPS, our timeframe is 1000 / 58.8 = 17 milliseconds. If doing our GameLoop only takes 10 milliseconds, we have 7 more “free time” milliseconds. Which is great, because this allows us to do other stuff, it gives some room to other background applications, and otherwise at least the CPU isn’t running at 100%, making a lot of noise all the time.

Here is the tricky part though. After you finish the Game-Loop, you should check how much time that took, and how long you can rest before taking the next step. A Delphi TTimer does that, but not very accurately, because non-real-time Windows isn’t generating timer-pulses very precisely in the background. That also causes our beloved “sleep( )” function to be unreliable. Calling “sleep(1)” may actually put your thread in bed for 10 milliseconds or so. So, how to keep a steady framerate then?
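A common workaround is a hybrid wait: sleep away most of the idle time coarsely, then busy-poll a high-resolution clock for the last stretch. Sketched below in Python (time.perf_counter playing the role of a high-precision counter; the 2 ms safety margin is an assumption, tune it for your OS):

```python
import time

def wait_until(deadline):
    """Wait until perf_counter() reaches 'deadline' (in seconds).
    Coarse sleep first, then spin, because sleep() may oversleep by
    several milliseconds on a non-real-time OS."""
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= 0.0:
            return
        if remaining > 0.002:
            time.sleep(remaining - 0.002)   # imprecise, but cheap on the CPU
        # last ~2 ms: just keep polling the clock (busy-wait, precise)

start = time.perf_counter()
wait_until(start + 0.020)                   # hold for one 50 FPS frame (20 ms)
elapsed = time.perf_counter() - start       # ~0.020 seconds, never less
```

The busy-wait burns a little CPU at the end of each frame, but it guarantees the next cycle never starts early.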

Engine22 “eMainloop” class

There are several High-Precision timer components for Delphi, and I’m sure the same thing is available for .NET or any other language. Engine22 also provides the “eMainloop” class (E22_Util.Mainloop), which is sort of a timer. You set it up by giving a desired target-framerate -which is basically a maximum speed-, and a Callback function. This callback function is called every time the eMainLoop object triggers. So, typically you execute your game (or whatever it is you’re making) stuff there.

            var
                Looper : eMainloop;

            procedure TForm1.initialize();
            begin
                self.looper := eMainloop.create( self.handle );
                self.looper.setTargetSpeed( 60 );
                self.looper.setCallback( self.gameMainLoop );
                self.looper.enabled := true;
            end; // initialize

            procedure TForm1.gameMainLoop( const deltaTime : eFloat );
            begin
                // Run your game stuff here, every cycle
            end; // gameMainLoop

So “gameMainLoop” gets called, 60 times per second (hopefully) in this example. The elapsed time between 2 frames, in theory 1000 / 60 = 16.66 ms, is given as an argument you can use. How to? Check the “DeltaTime” part at the bottom of this article.

How does it solve the timing issue? By not using the Windows OS timer messages, but using the Windows vWMTick_ASAP signal instead. This one is given mega-fucking-fast. The application is allowed to process messages every time we receive a tick, and we measure the elapsed time using the Windows QueryPerformanceCounter() function. This function returns an ever-incrementing tick-counter, which can be converted to milliseconds by dividing it by the clock frequency, which you can get via QueryPerformanceFrequency( ptrFrequency ). If the elapsed time exceeds our target interval, we call the given callback, “gameMainLoop” in this case.
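The tick-to-milliseconds conversion mentioned here is plain arithmetic. A small sketch in Python (the 10 MHz frequency is a made-up example value; on real hardware you'd take whatever QueryPerformanceFrequency reports):

```python
def ticks_to_ms(delta_ticks, ticks_per_second):
    """Convert a performance-counter difference to milliseconds."""
    return delta_ticks * 1000.0 / ticks_per_second

# Example: a 10 MHz counter advanced 166_666 ticks between two frames
frequency = 10_000_000
elapsed_ms = ticks_to_ms(166_666, frequency)   # ~16.67 ms, i.e. one 60 FPS frame
```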

Enough for today. For people with some experience, this whole story probably sounds all too obvious. But for a newcomer, it's probably good to understand what's going on. After all, game-code doesn't quite look like a common (Event driven) Desktop program. And since the looping/timer is such an essential thing, you'd better not rely on the standard Timer and dig a bit deeper in order to get control.

Saturday, September 5, 2015

Back to school: 3D Vectors

This (beginners) tutorial is written for Engine22 users, but also if you just want to know more about making games, engines, and 3D graphics. More tutorials will follow for sure, and can be found on (and at the time of writing, the link isn't up yet!). Have fun!

Remember Vectors from physics classes? Don't worry, neither do I. No but seriously, I'm a loser when it comes to math. Some parents say their kid is like a sponge. I'm more like a bowling ball, without holes. If I don't apply it immediately, my brain does a toilet flush and the gained knowledge exits the body through farts or nose pickles.

“Apply it immediately”... what healthy 16-year-old applies Pythagoras or Fresnel lens formulas to his daily problems? Only Newton's gravity law seems to apply sometimes, when falling drunk off your bike. E=MC2 in your face, stupid. So, there we are. In a classroom, thinking about the next part in Half-Life (1), staring at girls' boobs, drawing idiotic doodles on a piece of paper, waiting until you can finally leave that sweaty, moldy, reeking place. Somewhere in the background, an old teacher with glasses is drawing "arrows" on the chalkboard. Whatever man.

But who would have thought that those arrows are called "Vectors", and that I would use them on a daily basis, 17 years later? Just as realistic as using the leapfrog you learned at gymnastics.

As said, I'm just not that good at math so I won't be able to explain all the fine details about the math behind vectors. Then again, not being a professor, I can hopefully teach you a few things in human-language. While I'll put on my glasses and walk to the chalkboard, please stop chewing bubble-gum, stop doodling on your note block, and pay attention class. It's not that hard really, and if you want to make a game, you'd better stop watching your neighbour’s boobs, and focus on the chalkboard. Ahum:


Earth is flat, and coordinates in space are made of 3 components: x,y and z. Don't believe me? Look at your finger. Move it to the left. Now it moved into a negative X direction; your finger's X-coordinate decreased. Move up. Your finger's Y coordinate increased. And how about that Z component then? Simple, just move your finger backwards, away from you. Z, not from Zorro but "depth", coordinate increased. From your perspective at least. In your bedroom door opening, your mom is looking at you now, and thinking what the hell you're doing. From her perspective, that finger moved into different directions, and she is about to call an ambulance.

So, a position, also called "point", can be defined as {x,y,z}. A 3-component struct in programming terms, like this:
// pseudo Engine22 notation
eVec3 = record
       x, y, z : eFloat;
end;
And a point can be in different "spaces". In general, we have "local space" (or "model space") and "world space". The finger example explained the coordinates, relative to your own position. Move it up, and Y increases. If you were the centre of the universe -as your mom always says-, your body would be at {x:0, y:0, z:0}. But you're not. For a Chinese on the other side of the world, your finger is going down instead of up.

That is... if the Earth was the centre of the universe. Which it isn't either. In real life, coordinate systems are always sort of local. It really depends on what you take as a centre point, and also which direction counts as "forward", "up" and "left". If you hang upside-down like a bat the whole day, "up" might get a different definition.

Games are a bit easier. We just take a random point as the centre. Or well, random... On a GTA map, you may decide the bottom/left corner, at sea level, would be {0,0,0}. For Tower22, the centre of the ground floor is the centre of the "world". Game-world. When going up, Y goes higher. But there are also games/3D programs that take Z as "up". It's arbitrary, and it doesn't really matter as long as your 3D models and programming math all follow the same rules.

In a game-context, "local" coordinates usually apply on/within a 3D model. So we make a large outdoor area. And we place assets in them. Cars, light-posts, garbage containers, trees, cats, et cetera. Each object gets a "world coordinate". However, earlier while creating these objects (say we were modelling a car), we didn't know if/where and how these objects are placed within the world. We just pretend the world is gone, and now the centre of the car itself becomes the "centre of the world". So, probably the handbrake would be somewhere near that centre. The front bumper, is placed forward from the centre, thus having a larger Z coordinate. The rear bumper, "behind" us, would get a negative Z coordinate.

Now when we place this car in our world, we can rotate it 180 degrees. In local-car-space, the handbrake is still the centre, and the bumper is still forward at a higher Z coordinate. But in world-coordinates, you'll get a whole different picture.
So, in practice, every object in your world gets an "absolute", world-coordinate. Physics, movement, placement, and all that kind of stuff will be calculated with world-positions, world-vectors, and world-matrices (more about that later). But sometimes, you also want to do something in local-space. Or convert from local- to world-space, or vice-versa. Example. Your NPC has a sniper rifle, and wants to blow off your head. That happens. Your "head" is a sub-part of your "player-body" object. The NPC knows your player world-coordinate. But next, the exact head-position depends on what the player is doing. If he is in "prone" pose, the head will be low at the ground. If he is licking his own balls, his head will be somewhere in the middle of the whole body-object. The body-object, which drives such animations, knows the local head position. In order to make a good shot, blowing off your head while you were walking on hands with your legs around your neck, the local head coordinate needs to be transformed to world-space.
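The full version of that transform needs matrices, but a stripped-down numeric sketch already shows the idea. Below in Python (not Engine22 code), rotating only around the vertical Y-axis, with a made-up yaw angle:

```python
import math

def local_to_world(local, world_pos, yaw_degrees):
    """Rotate a local offset around the Y (up) axis, then add the object's
    world position. A real engine does this with a full 4x4 matrix."""
    a = math.radians(yaw_degrees)
    lx, ly, lz = local
    wx = lx * math.cos(a) + lz * math.sin(a)
    wz = -lx * math.sin(a) + lz * math.cos(a)
    px, py, pz = world_pos
    return (px + wx, py + ly, pz + wz)

# Local head position is 1.8 up; body stands at world {10, 0, 5}, turned 180 degrees
head = local_to_world((0.0, 1.8, 0.0), (10.0, 0.0, 5.0), 180.0)
# head ends up at (10, 1.8, 5): a point on the rotation axis doesn't move when rotating
```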

Jesus, that sounds hard. Yeah, it kinda is, and I won't explain the math now. But to do so, you'll need vector & matrix transformation calculations. Which are fortunately very common in any game engine, so you don’t have to reinvent the wheel. For now I'll just illustrate what can be done
with points. And I think... that I'm pretty much done. But to finish this story, let's throw some random (Engine22) code examples to get an idea:

                // Misc. point code
                function makeYourPointPlease( const x, y, z : eFloat ) : eVec3;
                begin
                    result.x := x;
                    result.y := y;
                    result.z := z;
                end;

                procedure movePointToTheLeft( var pnt : eVec3; const distance : eFloat );
                begin
                    pnt.x := pnt.x - distance;
                end;

                function distanceBetween2Points( const A, B : eVec3 ) : eFloat;
                var x, y, z : eFloat;
                begin
                    x := A.x - B.x;
                    y := A.y - B.y;
                    z := A.z - B.z;
                    result := Sqrt( x*x + y*y + z*z );
                end;



A point is just a... point, somewhere in a space. And you mainly need them for
- A.I. navigation
- Physics (movement, jumping, collision detection, ...)
- Tell video-card WHERE to draw
- Animations

Now to the Vector. In terms of programming, we can notate it the same. In fact, points and vectors are often the same struct or class in engines. In Engine22, a "vector" can be a vector(duh), but also a point or RGB colour. Anyhow, vectors can be visualized as arrows. Yes, that old man with glasses on the chalkboard wasn't fooling after all. A vector defines one or two things:
                - Direction: the arrow points to the left, upwards, a bit backwards, ...

                - Force: the vector length. The longer the vector-arrow, the faster it moves in that direction

Note that "normalized" vectors do not define a Force or Strength, only a direction. A normalized vector is made in such a way that the Force/Strength/Length is always 1. So a vector pointing to the left would be noted as {x:-1 y:0 z:0}. An upwards vector would be {x:0 y:+1 z:0}. A polygon "normal" (the direction it's facing) is an example of a normalized vector.
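Normalizing simply means dividing each component by the vector's length. A quick Python sketch (illustration only; Engine22 itself does this in its Delphi vector library):

```python
import math

def normalize(v):
    """Scale a vector so its length becomes exactly 1 (pure direction)."""
    x, y, z = v
    length = math.sqrt(x * x + y * y + z * z)
    if length == 0.0:
        raise ValueError("a zero-length vector has no direction")
    return (x / length, y / length, z / length)

left = normalize((-5.0, 0.0, 0.0))   # (-1.0, 0.0, 0.0): direction kept, length 1
diag = normalize((3.0, 0.0, 4.0))    # (0.6, 0.0, 0.8): a 3-4-5 triangle, length was 5
```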

The Matrix

Normalized vectors are used a lot for defining directions in drawing, physics, and rays in graphics (shaders). You see, objects usually don't only have a point, but also a certain direction. Looking from a helicopter perspective, you could rotate a car 360 degrees. This angle can be encoded into a direction vector. Although you would need 2 more vectors for the full picture, as you could also roll and flip this car, in case it crashes or something. This is what Matrices are used for. A (4x4) matrix defines 5 things:
·         position (translation)
·         turn / roll / pitch rotation vectors (3x)
·         scale
Basically a matrix is constructed from multiple vectors. In Engine22, every asset has a matrix. When you place a new object into your world, you define its position, you can rotate it, and eventually scale (shrink / enlarge) it. In other words, you'll be adjusting its matrix while moving it around with your mouse or keyboard.

This same matrix is fed into the physics system, so it knows where everything is, and also
shaders will get this matrix so they can position your object correctly on the screen (if visible
at all).

And back to Vectors

Matrix-math is quite a bit harder than vector-math, so let's spare that for another time, and continue this tutorial with some practical programming examples that show how these direction vectors can be used. Let's move bitch:

                procedure movePointUpwards( var targetPoint : eVec3; const unitsUp : eFloat );
                begin
                    targetPoint.y := targetPoint.y + unitsUp;
                    // Note it would move down if you give a negative number
                end;

                procedure movePoint( var targetPoint : eVec3; const forceVector : eVec3 );
                begin
                    // Assume the forceVector is NOT normalized,
                    // thus having a length/strength as well
                    targetPoint.x := targetPoint.x + forceVector.x;
                    targetPoint.y := targetPoint.y + forceVector.y;
                    targetPoint.z := targetPoint.z + forceVector.z;
                end;

                procedure movePoint( var targetPoint : eVec3; const direction : eVec3; const speed : eFloat );
                begin
                    // Direction vector is normalized here, multiply with speed
                    targetPoint.x := targetPoint.x + direction.x * speed;
                    targetPoint.y := targetPoint.y + direction.y * speed;
                    targetPoint.z := targetPoint.z + direction.z * speed;
                end;

Units in 3D space

So... if we move a point "20" to the left... then how far did it move exactly? My old physics teacher would be outraged whenever you wrote down a number without its corresponding unit. The mass of this block would be "10", mister. "10 what?! 10 ounce? 10 grams? 10 donkeys?!". "Kilograms, mister". "Then say so". And of course, he was right. But still an asshole.

The guys at OpenGL or DirectX didn't listen very well to their teachers though; there is no pre-defined unit for 3D coordinates. They basically say: "You figure it out.". So, if we move that point "20" to the left, we should decide ourselves if those are inches, meters, nautical sea miles, or donkeys. In Engine22, "1" = "1 meter". But in another program it could just as well be millimeters. So, it's important to pick one standard for your unit system, and scale imported models up or down if they come from a different 3D package. Lightwave also uses meters, but 3D Max uses centimetres if I'm not mistaken, so an imported model would be 100x bigger if you forget to downsize.
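Fixing such a unit mismatch boils down to multiplying every vertex position by one scale factor. A sketch in Python (hypothetical data; 0.01 converts centimetres to meters):

```python
def rescale_positions(positions, factor):
    """Multiply every vertex position by 'factor', e.g. 0.01 for cm -> m."""
    return [(x * factor, y * factor, z * factor) for (x, y, z) in positions]

# A 180 cm tall dummy model, authored in centimetres, imported into a meters world
cm_model = [(0.0, 0.0, 0.0), (0.0, 180.0, 0.0)]
m_model = rescale_positions(cm_model, 0.01)    # top vertex becomes (0, 1.8, 0)
```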

                procedure checkInput( );
                var movementSpeed : eFloat;
                begin
                    movementSpeed := 3.0;  // meters
                    // Move our player with the arrows, over the X and Z axis
                    if ( keyboard.leftArrowPressed ) then
                        movePoint( player.position, vec3( -movementSpeed, 0, 0 ) );
                    if ( keyboard.rightArrowPressed ) then
                        movePoint( player.position, vec3( +movementSpeed, 0, 0 ) );
                    if ( keyboard.downArrowPressed ) then
                        movePoint( player.position, vec3( 0, 0, -movementSpeed ) );
                    if ( keyboard.upArrowPressed ) then
                        movePoint( player.position, vec3( 0, 0, +movementSpeed ) );
                end;

Stop. DeltaTime!

The procedure above has a little problem. Assume your game runs at 60 frames per second, and you'll be checking input every cycle. Thus "checkInput" would be called 60 times per second. Since we move "3.0 meters" every time, your player will go into warp-drive; holding the key for 1 second would move him 3.0 * 60 = 180 meters per second! That's a bit too much, don't you think?

Now we could simply reduce the "movementSpeed", to avoid super-human speeds. 0.03 for example would lead to 0.03 x 60 = 1.8 meters per second.

But this still stinks. What if you run your game on a slower computer that can only reach a lousy 20 FPS at times? 0.03 x 20 = 0.6 meters per second. In the worst case, your FPS fluctuates, because you're downloading internet porn at the same time, causing hitches. Or how about this? You'll buy a quantum computer that easily reaches 1000 FPS. Now suddenly your guy moves at 30 meters per second!

Ever tried old DOS games on an emulator? You may have noticed that some games seem to play in fast-forward. That's because they didn't guard their speeds. One trick is to introduce a maximum FPS. Another (better) trick is to calculate "DeltaTime". In Engine22, most "update()" functions will get a "DeltaSecs" argument with them; the elapsed time in seconds between the current and previous cycle. So if we're at a steady 60 FPS, DeltaSecs would be 1/60 = 0.0166667. Now check this:

                procedure checkInput( const deltaSecs : eFloat );
                var movementSpeed : eFloat;
                begin
                    movementSpeed := 3.0 * deltaSecs;  // meters PER SECOND
                    // Move our player with the arrows, over the X and Z axis
                    if ( keyboard.leftArrowPressed ) then
                        movePoint( player.position, vec3( -movementSpeed, 0, 0 ) );
                    if ( keyboard.rightArrowPressed ) then
                        movePoint( player.position, vec3( +movementSpeed, 0, 0 ) );
                    if ( keyboard.downArrowPressed ) then
                        movePoint( player.position, vec3( 0, 0, -movementSpeed ) );
                    if ( keyboard.upArrowPressed ) then
                        movePoint( player.position, vec3( 0, 0, +movementSpeed ) );
                end;

Problem fixed. We multiply our original "3" with the elapsed time, so it gets a small number, 0.05 if the FPS was 60. And how much is 60 times 0.05? Exactly, 3. We moved 3 meters over a second.

Vectors in Engine22

You saw some of the basics. How to set a point, apply some movement, a bit about local versus world-space. There is a lot more you can do with vectors and matrices though. And therefore one of the basic elements each game-engine should have, is a Vector Library. And so does Engine22 of course. You already saw the "eVec3" type a few times above, which stands for "eVector3". That thing is being used all over the place, and has a bunch of handy help functions.

eVec3 is just a record with 3 floats. Note there is also an integer/byte/float32 variant, as well as an eVec4 type that has an extra W component. One of the reasons Engine22 won't work anymore in Delphi 7 is that I made grateful use of record operators & functions, which wasn't possible in the old days. This means you can call functions and do math with vectors:

                var v1, v2, vResult : eVec3;
                               vResult := v1 + v2;           // Sums up (v1.x + v2.x, y+y, z+z)
                               vResult := v1 - v2;            // Subtracts (v1.x - v2.x, ...)
                               vResult := v1 * v2;           // Multiplies component-wise (not a CROSS product!)
                               vResult := vec3( 10, -4.2, 90);     // Constructor
                               vResult := v1.norm();    // Get normalized vector
                               str     := v1.toString();
                               vResult.fromString( "10 -4.2 90" );
                               dot     := v1.dot( v2 );      // Dot product
                               distance:= v1.dist( v2 ); // Distance between v1 and v2
                               lookDir := v1.lookat( targetPosition );    // Gets direction towards target
                               length  := v1.len();          // Get vector length/strength
                               if ( v1 = v2 ) then vectorsAreEqual;
                               if ( v1 > v2 ) then v1IsStrongerThanV2; // Compare forces

Another nice feature is that you can use the same vectors for RGB (or eVec4 for RGBA) colors. Instead of typing "vector.x", you can type "vector.r" as well. It does the exact same thing, but makes more sense when writing color code. Vector colors are often used as input parameters for shaders and graphics. Note that 0 stands for black, and 1 for 255 (white) here, as we use a floating point notation instead of bytes.
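Converting between the two notations is a simple division or multiplication by 255. A sketch (the variable names are made up for the example, this isn't an E22 function):

                var byteRed  : eInt;
                    floatRed : eFloat;
                               // A byte color channel (0..255) maps to a float channel (0.0..1.0)
                               byteRed  := 255;
                               floatRed := byteRed / 255.0;             // 1.0 = full intensity
                               // And back again, rounding to the nearest byte
                               byteRed  := round( floatRed * 255.0 );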

And about 3D models, did you know that...

3D models (the stuff you make in 3D Max, Lightwave, Blender, Maya, ...) are basically just arrays of vectors? A polygon is made of (typically 3) "vertices", where a vertex could have a position, a texture coordinate, a normal-direction, maybe a tangent, maybe a weight. So, writing it down:

                Vertex = record
                               position        : eVec3;
                               textureCoord    : eVec2;     // Only 2 coordinates (U and V)
                               normal          : eVec3;
                               tangent         : eVec3;
                               weight          : eVec3;
                end;

                ModelMesh = class
                               vertexCount     : eInt;
                               vertices        : array of Vertex;
                end;

It's not exactly how the Engine22 VertexArray classes look like, but it's not far away from that either. When loading a 3D model, you'll be making arrays of vectors. When rendering the model, you'll be pushing those arrays to your videocard. Just keep that in mind.
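By the way, the normal in that Vertex record usually starts life as a "face normal": take two edges of the triangle, cross them, and normalize the result. A sketch below; note it assumes a "cross" helper on eVec3, which wasn't in the function list above, so treat that name as hypothetical:

                var a, b, c      : eVec3;     // The 3 corner positions of a triangle
                    edge1, edge2 : eVec3;
                    faceNormal   : eVec3;
                               edge1 := b - a;
                               edge2 := c - a;
                               // The cross product gives a vector perpendicular to both edges,
                               // thus perpendicular to the triangle surface
                               faceNormal := edge1.cross( edge2 );
                               faceNormal := faceNormal.norm();     // Normalize to length 1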


There is so much stuff you can do with them. The E22_Util.Base.Vector unit is one of the most commonly included units. And even if you're not a math genius, you'll get comfortable with vectors quickly.

To conclude, our deadly sniper headshot:

                procedure sniperHeadShot( targetEntity : eES_Entity );
                var localHeadPos    : eVec3;
                    worldHeadPos    : eVec3;
                    localGunPos     : eVec3;
                    worldGunPos     : eVec3;
                    targetDirection : eVec3;
                    targetDistance  : eFloat;
                    bulletPos       : eVec3;
                    traveledDist    : eFloat;
                    collided        : boolean;
                begin
                               // Get absolute head position
                               localHeadPos := targetEntity.getLocalPoint( HEAD );
                               worldHeadPos := targetEntity.getMatrix().localPointToAbsolute( localHeadPos );
                               // Get our gun position
                               localGunPos  := sniper.getLocalPoint( GUN_NOZZLE );
                               worldGunPos  := sniper.getMatrix().localPointToAbsolute( localGunPos );
                               // Get the (world) direction and distance between our gun and the head
                               worldGunPos.distAndDirectionTo( worldHeadPos, targetDirection, targetDistance );
                               // Simulate bullet path
                               bulletPos    := worldGunPos;  // Start here
                               traveledDist := 0.0;          // Traveled distance so far
                               collided     := false;
                               while ( not collided ) and ( traveledDist < targetDistance ) do
                               begin
                                              // Check if our path is clear
                                              collided := world.rayTrace( bulletPos, targetDirection, bulletSpeedMS * deltaSecs );
                                              // Move forward, with the speed of a bullet
                                              bulletPos    := bulletPos + targetDirection * bulletSpeedMS * deltaSecs;
                                              traveledDist := traveledDist + bulletSpeedMS * deltaSecs;
                               end; // while
                               if ( traveledDist >= targetDistance ) then
                               begin
                                              // Reached the target
                               end;
                end;
Of course, that's bollocks code, but it gives an impression of how to work with vectors. The vector-related functions shown do exist in E22, at least.