Test Vol 1 Jun. 2014 | Page 17

3 Implementation

Having talked about spaces and having derived the basic functions and algorithms for transforming and projecting, we can finally create proper rendering algorithms. Using the knowledge gained from the previous chapters, we can implement methods for drawing floors, walls, sprites and so on. As discussed in , these algorithms and functions are written for engines that allow changing the colour of an individual pixel on the screen.

This chapter focuses on three main rendering algorithms:

1. Floor and ceiling rendering
2. Wall rendering
3. Sprite rendering

3.A Defining the structures and other rules

Before working out the algorithms, some structures and classes must be defined. All the code is written in a C-like style, so the same code can be used in many other languages, such as Java, C and C#. Note that an actual implementation may depend on the engine you use.

Bitmaps

First, we need a structure to store pixel data for pictures and for the screen. In this implementation a pixel's colour is represented as ARGB packed into a single 32-bit integer. That way we can simply use an array of integers to represent a screen or a texture. From now on this pixel storage will be referred to as a bitmap. The structure looks like this:

    struct Bitmap
    {
        int[] pixels;       // Pixel data, stored as ARGB.
        int width, height;  // Width and height of the bitmap.
    }

The loading and handling of bitmaps is up to the engine and the main implementation.

The camera

Since the camera is an object in world space, it has a position and an orientation. Since we agreed to rotate only about the y axis, the orientation can be defined as a single angle. We also have znear and zfar in clip space, and we now need to give them values. But what values should we assign to them? Theoretically it doesn't matter; as long as
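As a concrete illustration of the 32-bit ARGB packing assumed by the Bitmap structure earlier in this section, here is a minimal C sketch. The helper names (pack_argb, argb_alpha and so on) are assumptions chosen for illustration, not part of the original text:

```c
#include <stdint.h>

/* Pack four 8-bit channels into one 32-bit ARGB pixel,
 * matching the layout a Bitmap's pixel array would store.
 * These helper names are illustrative, not from the text. */
static uint32_t pack_argb(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)a << 24) | ((uint32_t)r << 16)
         | ((uint32_t)g << 8)  |  (uint32_t)b;
}

/* Extract the individual channels back out of a packed pixel. */
static uint8_t argb_alpha(uint32_t p) { return (p >> 24) & 0xFF; }
static uint8_t argb_red(uint32_t p)   { return (p >> 16) & 0xFF; }
static uint8_t argb_green(uint32_t p) { return (p >>  8) & 0xFF; }
static uint8_t argb_blue(uint32_t p)  { return  p        & 0xFF; }
```

With this layout, a screen or texture is just an array of such integers, and writing a pixel is a single store into Bitmap.pixels.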