Disclaimer: The shader architecture of computer graphics is a highly technical subject. But trust me, I will guide you the easy way, so you don't need to know all the technical jargon & can still have fun reading.
Below is a scene that makes varying use of shaders:
A shader is a program used to calculate rendering effects, with a high degree of flexibility. Simply put, shaders create effects in a 3D environment. Whenever you draw an object with a pencil, you add some shading afterwards, and that shading is what makes the object look darker, grainy, realistic or whatever else you're going for. That's essentially what a shader does. Shaders run on the graphics processing unit (GPU), which is built for fast shading calculations. There are mainly two types of shaders. Before we continue, though, note that these separate shader types are a thing of the past: everything is now part of a unified shader architecture, and the shift from the former to the latter is considered one of the most significant evolutions in graphics technology. We'll get to that in a while.
1. Pixel Shader: These compute the color & other attributes of each pixel. Their operations range from applying lighting values (light falling on objects) and rendering shadows to bump mapping (a technique that makes flat surfaces appear bumpy and detailed without adding geometry) & other effects. Simply put, pixel shaders operate on pixels. There's a small sketch of the idea right after this list.
2. Vertex Shader: These are a bit more complex. A vertex in computer graphics is a data structure that describes a point in 3D space. Displayed objects are built from arrays of vertices, and those vertices describe where the object sits. Let's make it simpler: in geometry, a vertex is a point at a corner or intersection of a geometric shape. In 3D space, every 3D object has geometry (obviously), so the vertex shader does something to those objects. What is that something? If a pixel shader computes properties of pixels such as color and lighting, then a vertex shader does the same to objects, right? Wrong. The primary operations a vertex shader performs on a 3D object are displacement & transformation. It manipulates properties such as position, texture coordinates & color, but it cannot create new vertices. Long story short, whenever we're dealing with any 3D object, from buildings to bricks, Mr. Vertex Shader is behind the scenes. There's a second sketch of this right after the list too.
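To make the pixel shader concrete, here is a minimal, CPU-side sketch in C of the kind of computation one performs for a single pixel: simple diffuse (N·L) lighting. The function names and the numbers are mine, purely for illustration; a real pixel shader would be written in a shading language such as HLSL or GLSL and would run once per pixel on the GPU.

```c
/* Illustrative only: a CPU-side stand-in for a pixel shader's per-pixel
   lighting calculation. Not a real shader or a real API. */
#include <stdio.h>
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalize(Vec3 v) {
    float len = sqrtf(dot(v, v));
    Vec3 n = { v.x/len, v.y/len, v.z/len };
    return n;
}

/* "Pixel shader": given the surface normal and light direction at this pixel,
   return how brightly lit the surface is (0 = unlit, 1 = fully lit). */
static float shade_pixel(Vec3 normal, Vec3 to_light) {
    float d = dot(normalize(normal), normalize(to_light));
    return d > 0.0f ? d : 0.0f;   /* surfaces facing away get no light */
}

int main(void) {
    Vec3 normal   = { 0.0f, 1.0f, 0.0f };   /* surface points straight up   */
    Vec3 to_light = { 0.5f, 1.0f, 0.0f };   /* light is up and to the right */
    printf("brightness = %.2f\n", shade_pixel(normal, to_light));
    return 0;
}
```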
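And here is a matching sketch of what a vertex shader does to one vertex: multiply its position by a transformation matrix. Again, this is illustrative C with made-up values, not real shader code; the point is that an existing vertex gets moved, but no new vertices appear.

```c
/* Illustrative only: a CPU-side stand-in for a vertex shader transforming
   one vertex position by a 4x4 matrix (here, a simple translation). */
#include <stdio.h>

typedef struct { float x, y, z, w; } Vec4;

/* Multiply a column vector by a row-major 4x4 matrix. */
static Vec4 transform(float m[4][4], Vec4 v) {
    Vec4 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
    r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    return r;
}

int main(void) {
    /* Move every vertex 2 units along x -- "slide the brick to the right". */
    float move_right[4][4] = {
        { 1, 0, 0, 2 },
        { 0, 1, 0, 0 },
        { 0, 0, 1, 0 },
        { 0, 0, 0, 1 },
    };
    Vec4 corner = { 1.0f, 1.0f, 0.0f, 1.0f };   /* one corner of a cube */
    Vec4 moved  = transform(move_right, corner);
    printf("vertex moved to (%.1f, %.1f, %.1f)\n", moved.x, moved.y, moved.z);
    return 0;
}
```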
Now take a look at this scene:
This scene requires pretty complex calculations in the shader department. Can you tell which parts of the image use pixel shaders & which use vertex shaders? It's really simple. What are the 3D objects? Let me see: the trees, the rocks, the distant mountains & the houses. These definitely use vertex shaders. Then what are the remaining parts? The (beautiful) water & the fog. These use pixel shaders. But wait, it isn't over yet. What about the lighting effects? The lighting within the water is a subsurface scattering effect, which again uses pixel shaders. The shading on the trees is, of course, also done with pixel shaders. Phew... OK, moving on.
In earlier generations of GPUs, a fixed number of separate pixel & vertex shader units was built into the hardware, and this was considered sufficient. Say a graphics card has 16 pixel shaders & 8 vertex shaders. Now consider this scene:
As you can see, there is very little work for the pixel shaders; they are only needed to fill in the sky. But it's very stressful for the vertex shaders, because all the destruction & burning objects require far more polygon (the small flat faces a 3D object is built from) calculations. The highly detailed burning car alone takes up more than half of the card's vertex shaders. As a result, there will be slowdown & the scene will lag. On that card, very few of the 16 pixel shaders are being used, but the vertex shaders are being pushed to their limit & could certainly use some help. The remaining, unused pixel shaders, however, are just showing them the finger. This is a real computational problem: if either type of shader is over-utilized, overall performance can come crashing down. A toy calculation of this is sketched below.
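To put rough numbers on that bottleneck, here is a back-of-the-envelope sketch in C. The 16/8 split comes from the example above, but the "work unit" counts are made up purely to show why one pool chokes while the other sits idle; nothing here reflects how real hardware schedules work.

```c
/* Illustrative only: fixed, separate pixel and vertex shader pools. */
#include <stdio.h>

int main(void) {
    int pixel_units  = 16, vertex_units = 8;
    int pixel_work   = 4;     /* almost nothing to shade: mostly sky          */
    int vertex_work  = 40;    /* lots of debris & a very detailed burning car */

    /* Each pool works on its own, so the frame is only finished when the
       slower pool finishes. */
    int pixel_time  = (pixel_work  + pixel_units  - 1) / pixel_units;
    int vertex_time = (vertex_work + vertex_units - 1) / vertex_units;
    int frame_time  = pixel_time > vertex_time ? pixel_time : vertex_time;

    printf("pixel pool busy for %d step(s), vertex pool busy for %d steps\n",
           pixel_time, vertex_time);
    printf("frame takes %d steps -- the idle pixel shaders can't help\n",
           frame_time);
    return 0;
}
```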
Enter DirectX 10 (a Microsoft graphics API), which brought the concept of the unified shader, where each individual shader unit can perform both vertex & pixel shading operations. Problem solved. No more worrying about one part of the GPU sitting idle while the other part chokes. No more separate shaders. Newer GPUs all shipped with this unified architecture, and the newly improved shader units came to be called stream processors. These are fully programmable, so if there are stream processors to spare, they can be put to work on other computational tasks. Evidently, unified stream processors are much more capable & versatile than their predecessors. As time went on, with newer fabrication processes, GPUs featured ever-increasing numbers of SPs, & as of this writing, higher-end GPUs come equipped with SP counts running into the hundreds.
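Here is the same made-up workload from the previous sketch, this time fed to a single unified pool of 24 units (the old 16 + 8). Again, this is only a toy model, but it shows why letting every unit take either kind of job removes the bottleneck.

```c
/* Illustrative only: one unified pool handling both kinds of shading work. */
#include <stdio.h>

int main(void) {
    int unified_units = 24;       /* the 16 pixel + 8 vertex units, unified  */
    int total_work    = 4 + 40;   /* pixel work + vertex work from before    */

    int frame_time = (total_work + unified_units - 1) / unified_units;
    printf("frame takes %d steps with a unified pool (versus 5 in the "
           "fixed-pool sketch) -- no unit is left idle\n", frame_time);
    return 0;
}
```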
We're not done yet. DirectX 10 also brought with it a whole new kind of shader, called the geometry shader. Its functions are described below:
Geometry Shader: These can generate new graphics primitives (the simplest, atomic geometric objects), such as points, lines & triangles. Geometry shaders are used for really complex operations, ranging from tessellation (multiplying the number of polygons on an object to make it look more detailed) and creating uneven, non-symmetrical surfaces to modifying mesh complexity. Geometry shaders help produce ultra-realistic visuals like the one below.
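As a toy illustration of the one ability that sets geometry shaders apart (emitting new primitives), here is a CPU-side C sketch that splits one line segment into two by inserting a midpoint vertex. It's a stand-in for the basic idea behind tessellation, not real geometry-shader code.

```c
/* Illustrative only: one primitive goes in, two come out. */
#include <stdio.h>

typedef struct { float x, y; } Vec2;
typedef struct { Vec2 a, b; } Segment;

/* "Geometry shader": take one input segment, emit two smaller ones. */
static int subdivide(Segment in, Segment out[2]) {
    Vec2 mid = { (in.a.x + in.b.x) * 0.5f, (in.a.y + in.b.y) * 0.5f };
    out[0].a = in.a; out[0].b = mid;
    out[1].a = mid;  out[1].b = in.b;
    return 2;   /* number of primitives emitted */
}

int main(void) {
    Segment edge = { { 0.0f, 0.0f }, { 4.0f, 0.0f } };
    Segment finer[2];
    int n = subdivide(edge, finer);
    printf("1 segment in, %d segments out\n", n);
    for (int i = 0; i < n; i++)
        printf("  (%.1f, %.1f) -> (%.1f, %.1f)\n",
               finer[i].a.x, finer[i].a.y, finer[i].b.x, finer[i].b.y);
    return 0;
}
```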
There you have it, folks. I tried my best to explain shaders to you. Hope you liked it.
If you have any questions that need answering, sound off in the comments section.