With coding, I have always been uncomfortable with abstraction, meaning letting the computer do the work for me without really understanding what's going on under the hood. Sure, I can tell the computer to draw a cube at such and such coordinates, but how did it know HOW to draw the cube, and how the hell did it project this 3D object onto my flat 2D monitor?
Mappers need not concern themselves with such things; the beauty of their work depends on the landscape, not on raw understanding. But someone looking to modify the game's behavior will quickly find themselves lost in a world of vectors and quaternions.
Ever since playing (and mapping for) Duke Nukem 3D (I was very young), I have been plagued by my lack of understanding of the 3D world. Half-Life made it all the worse. It wasn't until this year, while day-dreaming during class, that it struck me that a 3D object could be projected onto a 2D surface (in our case, the computer screen) by drawing lines from every corner of the object to the perspective point. A plane would sit in front of the perspective point, and where each line intersected the plane would correspond to the pixel that corner occupies on the screen.
Each corner would know what other corners it was attached to, so lines could be drawn after the projection.
To see if this would work, it seemed easiest to choose corner coordinates in front of the XZ plane and derive parametric equations using the XZ plane as the plane of intersection, with the perspective point sitting slightly behind it (at negative Y). This led to the equations:
Where the X, Y, Z coordinates of the perspective point are P1, P2, P3
and the X, Y, Z coordinates of the corner being projected are V1, V2, V3
(After testing, I found this only works when the perspective point is directly behind the origin: Y is the only axis the perspective point can move along without distorting the object.)
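Since the original equations were posted as images, here is a quick sketch of the math as I understand it from the description: a parametric line runs from the perspective point P through each corner V, and we solve for the parameter t where that line crosses the XZ plane (y = 0). The specific perspective point (0, -4, 0) is just an illustrative choice.

```python
# Reconstruction of the projection described above: intersect the line
# P + t*(V - P) with the plane y = 0, and use the intersection's X and Z
# as the 2D screen coordinates.

def project(corner, perspective=(0.0, -4.0, 0.0)):
    """Project a 3D corner onto the XZ plane as seen from a perspective
    point sitting behind the origin on the negative Y axis."""
    vx, vy, vz = corner
    px, py, pz = perspective
    # Line: Q(t) = P + t * (V - P).  Solve the y component Q_y = 0 for t.
    t = -py / (vy - py)
    # Where the line pierces the plane becomes the screen coordinate.
    return (px + t * (vx - px), pz + t * (vz - pz))

# The eight corners of a cube sitting in front of the plane (y > 0):
cube = [(x, y, z) for x in (-1.0, 1.0) for y in (1.0, 3.0) for z in (-1.0, 1.0)]
screen = [project(c) for c in cube]
```

Each corner keeps its index, so the edges can still be drawn between the projected 2D points afterwards, just as the journal describes.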
Following this, I used the Arduino development board in conjunction with the TVout library to create a 3D cube on my old TV (inspired by the rotating cube that comes with the TVout library samples).
It's not pretty, it's not that impressive, but how can I understand goldsrc if I can't even make a cube from scratch?
3D graphics are really taken for granted these days, and I admit that goldsrc is not the prettiest to look at compared to modern engines. But goldsrc is extremely beautiful and innovative, and perhaps it's not until you attempt to write 3D from scratch that you are able to realize that.
I just know how to use the tools that control DirectX and OpenGL.
That's a neat journal man
I've always wanted to do some sort of 3D rendering on my calculator. Maybe now I'll try.
I do have to admit though; for an engineer, my math skills are lacking.
Also, I haven't seen a TV with integrated DVD player (or VHS) in like forever.
Yes, math definitely falls in the "If you don't use it, you lose it!" category. I used to kick myself for not remembering formulas and such, but now I'm getting more comfortable with just knowing what can be done. We can always look up the formulas later (which I had to do several times for this!)
I appreciate the feedback from you guys, and DiscoStu, that TV stands in my wife's womancave as a testament to an era gone by. I often retreat there to escape the sounds of "The Vampire Diaries" blaring from the living room TV. (The Mancave has no TV...)
Abstraction is very much the key when doing 3D programming, not only for your data structures, but for the rendering code as well. Even if you are using OpenGL/DirectX you need to know about low-level stuff like this so you can create your projection matrix. It gets even worse when you start looking into shaders, vertex arrays, frame buffers, and other things that introduce even more levels of abstraction between the programmer and the screen output. I still haven't gotten my head around all the complexities; there's so much stuff to learn.
The Arduino thing is interesting. You say only a library is needed, and you can output from the digital/analog pins directly into the TV?
I have an Arduino Uno myself.
I probably will never be able to wrap my head around 3D, especially OGL and how it works.
Xylem, I have just recently gotten the goldsource SDK running and able to compile. So far I have only tinkered with it by making Barneys shoot gibs and making it so pressing 'E' teleports the player about 30 feet in the air, complete with green fade and soundfx (simple and cheesy, but it's a start). The source code was intimidating at first, but I feel like I'm picking up on it pretty fast.
So what do OpenGL and Direct3D do for you at the lowest level? Do you specify vertices and make faces or something like that?
The_(c)Striker, yes, there is a library called TVout, and it only requires one wire and two resistors to get it up and running. Good library for simple games like Pong or Tetris.
Display control at the hardware level. What does an instruction do when it's called, and how does the hardware react? What happens with the hardware itself when it gets an instruction? How does it react electrically...
A lot of interesting shit.
I myself have been experimenting with MS-DOS graphical programming and have gotten my hands on some interesting literature.
But I haven't attempted making any programs just yet, since I don't completely understand all the major features of the C language, which I am currently learning on my own. I am currently at pointers.
I am learning C on an IBM 380ED.
For a good start on how to use OpenGL, check out this tutorial: http://www.arcsynthesis.org/gltut/
I don't know DirectX so I don't have any good resources for it on hand, but I know that there are plenty out there.
A vertex shader is used to transform your world geometry into 2D screen-space. You should look into homogeneous matrix transformations to understand how rotation, scaling, translation, and perspective transformations can be conveniently packed into a single matrix. You use these transformations to position your cube in the world where you want it. Then you transform that world matrix by a view matrix, which moves the entire world in front of the camera. It's interesting to note that when matrix transformations are used, moving the entire world around the camera is the same amount of work as moving a single point. Then you transform the positions by a perspective transformation matrix, which projects 3D points into 2D screen clip-space.
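To make the model/view/perspective chain concrete, here is a minimal sketch in plain Python (4x4 matrices as nested lists; a real renderer would hand this to the GPU). The particular perspective matrix and the parameter d are my own simplified choices for illustration, not code from any actual engine.

```python
# Homogeneous 4x4 transforms: model, view, and perspective collapse into
# one matrix, so moving the whole world costs the same as moving one point.

def mat_mul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    """Apply a 4x4 matrix to the homogeneous point (x, y, z, 1)."""
    p = [v[0], v[1], v[2], 1.0]
    return [sum(m[i][k] * p[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def simple_perspective(d):
    """Copy z into w scaled by 1/d, so the divide by w below shrinks
    x and y with distance (camera at origin, looking down +Z)."""
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1.0 / d, 0]]

model = translation(0, 0, 5)   # push the cube 5 units ahead of the camera
view = translation(0, 0, 0)    # identity camera, to keep the sketch short
mvp = mat_mul(simple_perspective(1.0), mat_mul(view, model))

x, y, z, w = transform(mvp, (1.0, 1.0, 0.0))  # one cube corner
screen = (x / w, y / w)        # the perspective divide by w
```

The divide by w at the end is the same "closer things look bigger" effect as the parametric-line construction earlier in the thread, just packed into matrix form so every vertex goes through one uniform pipeline.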
After the vertices are transformed into screen clip-space they are rasterized. Triangles are defined by points, but so far they are only points. A rasterizer takes the points and "fills in between the lines" to form the collection of pixels that make up the rendered primitive. These pixels get sent down into a pixel shader, which decides the final color of each pixel as it is written to the screen. Texturing and lighting computations are performed in a pixel shader.
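The "filling in between the lines" step can be sketched with the classic edge-function (half-space) test; this is one common way rasterizers decide pixel coverage, written here as a slow but clear Python loop rather than anything a GPU actually runs.

```python
# Toy rasterizer: a pixel is covered by the triangle if its center lies on
# the same side of all three edges (all edge functions share a sign).

def rasterize_triangle(p0, p1, p2, width, height):
    """Return the set of (x, y) pixel coords covered by a 2D triangle."""
    def edge(a, b, p):
        # Signed area test: which side of edge a->b does point p fall on?
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    pixels = set()
    for y in range(height):
        for x in range(width):
            c = (x + 0.5, y + 0.5)  # sample at the pixel center
            w0 = edge(p1, p2, c)
            w1 = edge(p2, p0, c)
            w2 = edge(p0, p1, c)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                pixels.add((x, y))
    return pixels

covered = rasterize_triangle((0, 0), (4, 0), (0, 4), 4, 4)
```

In a real pipeline the same edge weights are reused as barycentric coordinates to interpolate texture coordinates and colors across the triangle before the pixel shader runs.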
Aside from the rasterizing, DirectX / OpenGL provide tons of abstractions that make rendering tasks more efficient. These abstractions are mostly about controlling GPU memory.
That's a mini-description of the rendering pipeline that OGL/DX use.
I would absolutely love to learn how to do that.
But I probably need more schooling. Sigh. If i won the lotto, I would go back to school to learn how to do all this.
Funny that; I would spend my money on all the education I would ever want.
One can dream.