Posts tagged 2d

Texture packing to color channels.


It's been some time since I posted a research article, so I decided to share some optimization approaches I play with, mostly using Genome2D, although they can be applied to any GPU framework/platform/language.

I am often hired as an external specialist to optimize various in-house engines, and not too long ago, once again on such a mission, I was tasked with optimizing a 2D GPU engine. The difference was that in this case the problem wasn't speed but rather the limited GPU resources available to store all the texture data. I can't go into much detail about the engine or the project involved, but they basically had too many often quite large yet simple textures. The first thing that came to my mind was to move everything to vectors, as they are far less of a memory hog than textures. An example that comes to mind is Tiny Thief: they had exactly the same problem, with so much graphics on the screen there was simply no way they could have stored it all in GPU memory. To cut it short, this idea was declined for one reason or another, most probably asset creation. So back to the drawing board. Focusing again on the simplicity of the textures, after a few tests I came up with an idea to essentially pack 4 textures into a single one, using a simple pixel shader to draw the correct texture of the four when rendering to the screen. Performance wasn't an issue here, so we could afford a slightly heavier pixel shader. On high-end mobile devices you are in fact more likely to run into memory problems than performance problems.

Enough of my chit chat, let's explain my approach of packing textures into separate color channels, which allowed me to load 4 times the amount of assets I would normally be able to.

A little bit of warning here: the techniques and algorithms explained below are not always the most optimized, as I want to focus on the simplicity of the example code instead of explaining the nuances of optimization. I am going to use framework-agnostic code just to illustrate what is going on.


First we are going to extract the palette information. This should be done externally; I wrote a little tool for them which I can't give away, but it's pretty simple. You could also do it at runtime, but that is a huge waste. So what we are going to do is go over all the textures we want to pack and extract their pixel colors.

  for (i in 0...texturesCount) {
     colors = new Array<Int>();
     for (x in 0...textures[i].width) {
        for (y in 0...textures[i].height) {
           color = textures[i].getColorAt(x, y);
           index = colors.indexOf(color);
           if (index == -1) {
              index = colors.length;
              colorTexture.setColorAt(index, i, color);
              colors.push(color);
           }
           textures[i].setColorAt(x, y, index);
        }
     }
  }

So what we are doing here is basically creating a palette from each texture and replacing the actual colors with their index in the palette. Yes, we end up with a bunch of textures of the same size as our originals, just with colors replaced by indices, plus a new texture holding the palettes, where the first line is the palette of the first texture, the second line is the palette of the second texture and so on. Those of you that didn't get lost have also spotted the limitation and where I am going with this. This is why I said the idea came to me once I ran various tests on the textures in the project: what I discovered is that all of those textures use up to 256 colors, no single texture had more than 256, and that's why I decided to pack them into color channels. If we had just grayscale textures it is obvious we could pack them nicely, but since our textures use various colors, that's where the palette comes in.

The palette can store any colors as long as a single texture doesn't contain more than 256 of them, and even this limitation only applies because we are going to use a single color channel to store the palette index. So now our textures can have any colors, in contrast to just using grayscale; what's more, we are not limited to 256 colors across all the textures, as each texture can have an entirely different set of 256 colors, yep, within the same atlas, as we will see.
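The palette step above can be sketched in a few lines of Python. This is just an illustration of the idea, not the author's actual tool; the helper names and the list-of-lists texture representation are my own assumptions.

```python
def build_palette(texture):
    """Replace each color with its index in this texture's own palette.

    texture: 2D list of packed RGB ints. Returns (indexed_texture, palette),
    where palette is one 'line' of the palette texture described above.
    """
    colors = []                       # this texture's palette, max 256 entries
    indexed = []
    for row in texture:
        indexed_row = []
        for color in row:
            if color not in colors:
                colors.append(color)  # first time we see this color
            indexed_row.append(colors.index(color))
        indexed.append(indexed_row)
    assert len(colors) <= 256, "a single texture may not exceed 256 colors"
    return indexed, colors

def restore(indexed, palette):
    """Inverse lookup: palette indices back to the original colors."""
    return [[palette[i] for i in row] for row in indexed]
```

The round trip `restore(build_palette(t))` gives back the original texture, which is exactly what the pixel shader will do later on the GPU.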


Now that we have all these new textures we need to pack them, but we are not going to pack them into an atlas of the same size. Let's say we were able to pack all our textures into a 2048×2048 atlas; since we are going to use 4 channels, we will instead pack them into four 512×512 atlases.

Nope, we can't just slice the 2048×2048 atlas into 4 parts, because each texture needs to end up in a single color channel, and if we just sliced it we could potentially cut a subtexture in half.

  for (i in 0...4) {
     packer = new Packer(512, 512);
     var j:Int = 0;
     while (j < textures.length) {
        texture = textures[j];
        if (packer.pack(texture)) {
           textures.remove(texture);
        } else {
           j++;
        }
     }
     packers.push(packer);
  }

Again, a simple example of how the packing would go: we create a packer for a 512×512 atlas, try to pack as many textures into it as possible, and once we can't fit any more we move on to the second one and so on.
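The Packer class above is left abstract on purpose; any rectangle packer works. For completeness, here is one of the simplest possible implementations, a shelf packer, sketched in Python (the class and its interface are my own illustration, not Genome2D's actual packer):

```python
class ShelfPacker:
    """Naive shelf packing: fill horizontal shelves left to right."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.x = self.y = self.shelf_height = 0

    def pack(self, w, h):
        """Try to place a w x h rectangle; return (x, y) or None if full."""
        if self.x + w > self.width:              # doesn't fit, start a new shelf
            self.x, self.y = 0, self.y + self.shelf_height
            self.shelf_height = 0
        if self.x + w > self.width or self.y + h > self.height:
            return None                          # atlas is full
        pos = (self.x, self.y)
        self.x += w                              # advance along the shelf
        self.shelf_height = max(self.shelf_height, h)
        return pos
```

A real-world packer would sort textures by height first and use a smarter heuristic, but the failure case is the same: once `pack` returns nothing, you move on to the next 512×512 atlas.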

Now we have four 512×512 textures and here comes the channel packing: we are going to create a single 512×512 texture that contains all four of them. We take the first one, and since it contains only values from 0 to 255 we write it into the Red channel of the new texture. Then we take the second one and write it into the Green channel, then Blue and finally Alpha. We have now created a 512×512 texture that essentially contains a 2048×2048 atlas. Here is a simple example from my demo at the end:

It is shown without packing to the alpha channel, as you would see even less otherwise and this illustrates it better. It's quite a noise, but don't worry, you will get your real textures back out of it.

These two steps should definitely be preprocessed in your project instead of generated at runtime, which can be obscenely costly.
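The channel-packing step itself is trivial; a Python sketch makes it obvious (again an illustration with assumed helper names, the atlases being 2D lists of 0–255 palette indices):

```python
def pack_channels(atlases):
    """Merge four same-sized single-channel atlases into one RGBA texture.

    atlases: list of four 2D lists of 0-255 values; atlas 0 goes to the
    Red channel, 1 to Green, 2 to Blue, 3 to Alpha.
    """
    h, w = len(atlases[0]), len(atlases[0][0])
    return [[(atlases[0][y][x], atlases[1][y][x],
              atlases[2][y][x], atlases[3][y][x])
             for x in range(w)] for y in range(h)]

def unpack_channel(packed, channel):
    """Recover one atlas from the RGBA texture: 0=R, 1=G, 2=B, 3=A."""
    return [[pixel[channel] for pixel in row] for row in packed]
```

Four bytes per pixel are used either way; the win is that each byte now carries a different atlas instead of one atlas's R, G, B and A.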


Now that we have our assets ready we just need to render them. For each sprite we need to tell the GPU what the palette index for its texture is (its line in the palette texture) and also a color mask to eliminate the data in the other channels. So 5 additional bytes: RGBA + index. I am not going to write code here as this is very specific to your language/platform and even to how you upload your data to the GPU, but I think it's pretty self-explanatory.
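Still, just to pin down what those 5 bytes contain, here is one possible layout sketched in Python; the function name and the exact byte order are illustrative only, since how you actually interleave this into your vertex or constant data is entirely platform-specific:

```python
def sprite_constants(channel, palette_row):
    """Build the 5 extra per-sprite bytes: RGBA channel mask + palette index.

    channel: which channel of the packed texture holds this sprite's
    indices (0=R, 1=G, 2=B, 3=A); palette_row: 0-255, the sprite's line
    in the palette texture.
    """
    mask = [0, 0, 0, 0]
    mask[channel] = 255   # a dot product with this mask zeroes the other channels
    return bytes(mask + [palette_row])
```

On the GPU the mask usually arrives as a normalized float4 (0.0 or 1.0 per component), which is exactly what the dot product in the shader below expects.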

Once you have this data on the GPU, just get it to the pixel shader. I assume you move it through the vertex shader first, as you are probably batching in some way instead of issuing single draw calls; otherwise you can send it to the pixel shader directly. And now comes the shader magic. I am going to use AGAL here as it's pretty self-explanatory.

  tex ft0, v0, fs0 <2d,clamp,nearest>    // sample the packed index atlas
  dp4 ft0.x, ft0, v3                     // mask out the other channels -> U coordinate
  mov ft0.y, v2.x                        // palette line index -> V coordinate
  tex ft2, ft0, fs1 <2d,clamp,nearest>   // look up the actual color in the palette texture
  mov oc, ft2                            // output the final color

First we sample the atlas texture, where the palette indices are now stored. Then we use a dot product with the color mask, which eliminates all the channels except the one holding the index for this texture, and store the result as the U coordinate for the palette lookup. Next we move the texture index we sent ourselves into the V coordinate. Now we use these new UVs to look up the correct color for this pixel in our palette texture and finally render it.

If we just needed grayscale packing without a palette lookup, all we would do is take the sampled color, dot product it with the color mask and output the result.
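The whole per-pixel logic of the shader above can be mocked on the CPU in a couple of lines; this Python sketch (my own illustration, working with raw integer indices instead of normalized floats) is handy for sanity-checking the packed data before it ever reaches the GPU:

```python
def shade_pixel(packed_texel, mask, palette_row, palette):
    """CPU mock of the AGAL shader above.

    packed_texel: (r, g, b, a) palette indices sampled from the packed atlas;
    mask: 4-tuple with a single 1 selecting this sprite's channel;
    palette_row: the sprite's line in the palette texture;
    palette: 2D list of colors, one row per texture.
    """
    index = sum(t * m for t, m in zip(packed_texel, mask))  # the dp4 -> U
    return palette[palette_row][index]                      # palette lookup
```

The dot product is just a branch-free way of picking one of four values, which is why the same shader serves sprites living in any of the four channels.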


Here is a working example. Even though I could have used just grayscale packing here, it showcases the palette index packing: I pack a bitmap font into 4 channels and then render thousands of glyphs (SPACE to enable motion). In the top left of the screen is the packed texture; it's almost invisible due to the alpha channel being used for packing as well, obviously. The second texture to the right is the palette texture, and the one in the top right is the original unpacked bitmap font.

Example HERE

If you are interested in a video about this, check out my Genome2D DevCast #3 on YouTube HERE

Genome2D experiments Spriter/StencilShadows


Hi there guys, some of you that are not inside our small awesome community may be curious whether there is something going on with Genome2D. Everything is going smoothly and I am working on it almost daily, there is just no time to blog. So today I decided to share two of my work-in-progress Genome2D experiments.

First is support for the Spriter format. Some of you already know Spriter, an upcoming awesome tool for 2D animation; for those that are not familiar with it, you should definitely check it out.

It already supports interpolation/tweening of the movement, and bones support is coming next; I am just waiting for the upcoming beta build of Spriter.

Another experiment I've been working on is stencil shadows. This involves additional shaders, low-level draws, materials and components, so it's quite a major addition and I bet all of you will enjoy it. Here is a demo which is a clone of my old Flash Player 9 Genome2D demo.

Also, the new Genome2D forum is up and we should all move there. I will not move the current forum db over, as it would be tedious and most of the information isn't that valuable anymore anyway. I am looking to our most experienced Genome2D users to start the new forum up :P

That's all folks, as usual due to time constraints. I am going to Venice, and next week I am in Prague for the Geewa hackathon; once back I will dive into Genome2D again. Cheers.



Hi guys, just wanted to wish all of you a happy new year 2013, and here is a little fireworks demo. Enjoy.
(Press F for FULLSCREEN)

You can find the source code on GitHub here as well.

Genome2D postprocessing


Postprocessing is finally here. It combines already implemented features in Genome2D and turns them into something even more powerful. Any node can have a postprocess attached; this renders the node and all its children through a special rendering pipeline involving multiple render-to-texture passes, where each pass may have one or more filters attached. This way we can achieve effects that are not achievable with filters alone, for example blur, which involves multiple passes, not to mention that we need to go outside of the sprite texture boundaries to render blur correctly. There are even more complex composite postprocesses, like HDR or Bloom, that involve multiple other postprocesses.
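The pass structure described above can be reduced to a tiny sketch. This is a toy Python model, not Genome2D's actual API: each pass takes the previous pass's buffer and runs its filters over the whole thing, which is exactly what lets a neighborhood effect like blur reach outside any single sprite's texture:

```python
def run_postprocess(buffer, passes):
    """buffer: 2D list of values; passes: list of passes, each a list of
    filters, where a filter is a function mapping a buffer to a new buffer."""
    for filters in passes:           # one render-to-texture pass
        for f in filters:            # each pass may apply several filters
            buffer = f(buffer)
    return buffer

def box_blur_h(buffer):
    """Horizontal 3-tap box blur -- it reads neighboring pixels, so it
    needs the whole rendered buffer, not just one sprite's texture."""
    w = len(buffer[0])
    return [[sum(row[max(0, x - 1):x + 2]) / len(row[max(0, x - 1):x + 2])
             for x in range(w)] for row in buffer]
```

A full blur would chain a horizontal and a vertical pass, and a composite like Bloom would chain threshold, blur and additive-combine passes in the same fashion.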

Any filter may be used in postprocessing. Multiple filters were added, some of which are specifically designed to work with postprocesses and should not be used as standalone filters, although if you want to use them that way nothing is stopping you except the limitation that you are rendering within the sprite texture only.

This is one of the biggest updates so far, as postprocessing and filters introduced over 20 new classes, so I decided to bump the version up to 0.9.3. Yep, I know there never was an official 0.9.2 release, but this has more to do with my way of SVN versioning, so I am letting you know there will be no official release of version 0.9.2 and we will jump from 0.9.1 to 0.9.3 once everything is finished. You can always grab the 0.9.2 or even the new 0.9.3 nightlies that are coming if you want to test and play with it, but let me warn you that it is quite complicated to get started without any documentation.

Postprocessing will also work with the existing pipeline, so masking, cameras and other features should work seamlessly. The only limitation is that you can't have hierarchical postprocessing: you can't have children with one postprocess and their parent with a second postprocess that would affect them. This has to do with render-to-texture limitations, so it's not doable, and no framework will have it. I did discover a workaround, but it adds so much overhead that I decided not to implement it; never say never though ;)

I was thinking about what to post as an example, and in the end I decided to post a scene from my upcoming Genome2D showcase demo. It actually uses the new particle systems with a tweenable forcefield as well. Enjoy, and as usual it uses one of my own photos:

If anyone has an idea for a cool postprocess I am up for it; the demo shows just a few examples, as you can combine pretty much anything :)

There is one more feature coming before the new Genome2D version release, the already mentioned texture packers. After that I will clean up the code and start working on the standalone examples. I can't give you an exact time, as there will be a lot of work outside of Genome2D for me this upcoming month, so keep your fingers crossed, and as usual any feedback is welcome. I am glad to see more people active on the forums as well.

Genome2D Vector textures


Hi there guys, just a quick post about yet another new feature that will be added to Genome2D. I am not sure whether it will come with the next version, as there are a few issues to sort out, but it's definitely coming.

Vector textures. Yep, some of you may ask what that is, as you can't upload vectors to the GPU as textures. You are correct, the title is not exact; it's a method that lets you render vectors on the GPU using bitmap textures. It's quite a hack, using distance field preprocessing and bilinear filtering of the textures in the pixel shader, coupled with alpha testing. The bilinear filtering also means we can use a pretty small distance field representation to reproduce large vectors, depending on the amount of detail we want to achieve. There are limitations, however: for example, this approach has problems with sharp corners or in cases where edges are too close together. There are ways to improve the method as well, each with its own pros and cons. Personally I think it's awesome ;)
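To make the trick concrete, here is a toy Python sketch of the whole pipeline for a single circle shape (everything here is my own simplified illustration: a real implementation computes the distance field from actual vector outlines and does the thresholding in a pixel shader):

```python
import math

def make_distance_field(size, radius):
    """Signed distance (in texels) from each texel center to a circle edge;
    positive inside the shape, negative outside."""
    return [[radius - math.hypot(x + 0.5 - size / 2, y + 0.5 - size / 2)
             for x in range(size)] for y in range(size)]

def sample_bilinear(field, u, v):
    """Bilinear lookup at texture coordinates u, v in [0, 1] -- the same
    interpolation the GPU's bilinear filtering gives us for free."""
    size = len(field)
    fx, fy = u * (size - 1), v * (size - 1)
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, size - 1), min(y0 + 1, size - 1)
    tx, ty = fx - x0, fy - y0
    top = field[y0][x0] * (1 - tx) + field[y0][x1] * tx
    bot = field[y1][x0] * (1 - tx) + field[y1][x1] * tx
    return top * (1 - ty) + bot * ty

def render(field, out_size):
    """Alpha test: a pixel is inside the shape if the interpolated
    distance is >= 0, giving a crisp edge at any magnification."""
    return [[1 if sample_bilinear(field, x / (out_size - 1),
                                  y / (out_size - 1)) >= 0 else 0
             for x in range(out_size)] for y in range(out_size)]
```

Because the distance varies smoothly across the shape edge, interpolating it and thresholding reconstructs the edge between texels, which is why a tiny field can stand in for a much larger vector; it is also where sharp corners suffer, as the interpolation rounds them off.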

Ok, enough talk, here is a demo with a 128×128 vector texture in action:

I am still on a roll, so hopefully it will last as long as possible :) I understand that I should focus on tutorials and documentation, and it is coming, but I simply don't want to waste all these ideas floating around in my mind and refocus on documentation instead of putting them into Genome2D as fast as I can :) Once I feel burned out on the actual coding there will be plenty of time for documentation.

Any feedback is welcome as usual.
