Hi there guys, some of you that are not inside our small awesome community may be curious if there is something going on with Genome2D. Everything is going smoothly and I am working on it almost daily; there is just no time to blog. So today I decided to share two of my work-in-progress Genome2D experiments.
First is support for the Spriter format. Some of you already know Spriter (http://www.brashmonkey.com/spriter.htm); it’s an awesome upcoming tool for 2D animation, and for those not familiar with it, you should definitely check it out.
It already supports interpolation/tweening of movement, and bone support is coming next; I am just waiting for the upcoming beta build of Spriter.
Another experiment I’ve been working on is stencil shadows. This involves additional shaders, low level draws, materials and components, so it’s quite a major addition and I bet all of you will enjoy it. Here is a demo which is a clone of my old Flash Player 9 Genome2D demo.
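For the curious, the core geometric idea behind 2D stencil shadows can be sketched in a few lines. This is my own illustration of the technique, not Genome2D code; the function names are hypothetical, and the real work happens in shaders against the stencil buffer:

```python
# Hypothetical sketch of the 2D stencil-shadow idea (not Genome2D's actual
# implementation): each occluder edge facing the light is extruded away from
# the light to form a shadow quad, which is drawn into the stencil buffer.

def extrude_vertex(light, vertex, length=1000.0):
    """Push a vertex away from the light by `length` units."""
    dx, dy = vertex[0] - light[0], vertex[1] - light[1]
    d = (dx * dx + dy * dy) ** 0.5
    return (vertex[0] + dx / d * length, vertex[1] + dy / d * length)

def shadow_quad(light, edge):
    """Build the shadow quad cast by a single occluder edge."""
    a, b = edge
    return [a, b, extrude_vertex(light, b), extrude_vertex(light, a)]

def edge_faces_light(light, edge):
    """An edge casts a shadow only if its normal points toward the light."""
    (ax, ay), (bx, by) = edge
    nx, ny = ay - by, bx - ax          # normal of edge (a -> b)
    cx, cy = (ax + bx) / 2, (ay + by) / 2
    return nx * (light[0] - cx) + ny * (light[1] - cy) > 0
```

On the GPU the shadow quads are rendered into the stencil buffer first, and the light pass is then masked by the stencil test, which is why the feature touches shaders, low level draws and materials at once.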
Also, the new Genome2D forum is up and we should all move there. I will not migrate the current forum database, as it would be tedious and most of the information isn’t that valuable anymore anyway. I am looking for our most experienced Genome2D users to start the new forum up :P
That’s all folks, as usual due to time constraints. I am going to Venice, and next week I am in Prague for the Geewa hackathon; once back I will dive into Genome2D again. Cheers.
Postprocessing is finally here. It combines already implemented Genome2D features and turns them into something even more powerful. Any node can have a postprocess attached; it renders the node and all its children through a special rendering pipeline involving multiple render-to-texture passes, where each pass may have one or more filters attached. This way we can achieve effects that filters alone can’t, for example blur, which needs multiple passes, not to mention that we need to render outside of the sprite’s texture boundaries for the blur to be correct. There are even more complex composite postprocesses, like HDR or Bloom, that combine multiple other postprocesses.
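To illustrate why blur is the textbook multi-pass case, here is a minimal sketch of a separable blur on plain arrays. This is my own CPU-side illustration, not Genome2D shader code; the kernel and function names are assumptions for the example:

```python
# Sketch of a separable blur: a 2D Gaussian blur splits into a horizontal
# pass followed by a vertical pass over the result of the first pass --
# exactly the two render-to-texture passes the postprocess pipeline runs.
# Each pass also reads neighbours past the edge, which is why the target
# texture must be padded beyond the sprite's own boundaries.

KERNEL = [0.25, 0.5, 0.25]  # small 1D Gaussian approximation

def blur_1d(row):
    """One blur pass over a row; pads with zeros past the edges."""
    padded = [0.0] + list(row) + [0.0]
    return [sum(k * padded[i + j] for j, k in enumerate(KERNEL))
            for i in range(len(row))]

def blur_2d(image):
    """Pass 1: blur each row. Pass 2: blur each column of the result."""
    pass1 = [blur_1d(row) for row in image]
    cols = [blur_1d(col) for col in zip(*pass1)]
    return [list(row) for row in zip(*cols)]
```

A single bright pixel spreads into a 3×3 blob after the two passes, something no single-pass, single-texture filter could produce while staying inside the sprite’s bounds.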
Any filter may be used in postprocessing. Multiple filters were added, some specifically designed to work with postprocesses; these should not be used as standalone filters, although if you want to, nothing is stopping you except the limitation that you are then rendering within the sprite texture only.
This is one of the biggest updates so far, as postprocessing and filters introduced over 20 new classes, so I decided to bump the version up to 0.9.3. Yep, I know there never was an official 0.9.2 release, but this has more to do with my way of SVN versioning, so I am letting you know there will be no official 0.9.2 release and we will jump from 0.9.1 straight to 0.9.3 once everything is finished. You can always grab the 0.9.2 or even the new 0.9.3 nightlies that are coming if you want to test and play with it, but let me warn you that it is quite complicated to get started without any documentation.
Postprocessing will also work with the existing pipeline, so masking, cameras and other features should work seamlessly. The only limitation is that you can’t have hierarchical postprocessing: you can’t have children with one postprocess and their parent with a second postprocess that would affect them. This has to do with render-to-texture limitations, so it’s not doable, and no framework will have it. I did discover a workaround, but it adds so much overhead that I decided not to implement it; never say never though ;)
I was thinking about what to post as an example, and in the end I decided to post a scene from my upcoming Genome2D showcase demo; it actually uses the new particle system with a tweenable forcefield as well. Enjoy, and as usual it uses one of my own photos:
If anyone has an idea for a cool postprocess I am up for it, the demo shows just a few examples as you can combine pretty much anything :)
There is one more feature to finish before the new Genome2D version release: the already mentioned texture packers. After that I will clean up the code and start working on the standalone examples. I can’t give you an exact time, as there will be a lot of work outside of Genome2D for me this upcoming month, so keep your fingers crossed, and as usual any feedback is welcome. I am glad to see more people active on the forums as well.
Hi there guys, just a quick post about yet another new feature that will be added to Genome2D. I am not sure if it will come with the next version or not, as there are a few issues to sort out, but it’s definitely coming.
Vector textures. Yep, some of you may ask what’s that, as you can’t upload vectors to the GPU as textures. You are correct, the title is not exact; it’s a method where you can render vectors on the GPU using bitmap textures. It’s quite a hack, using distance field preprocessing and bilinear filtering of the textures in the pixel shader, coupled with alpha testing. The bilinear filtering also means we can use a pretty small distance field representation to preprocess large vectors, depending on the amount of detail we want to achieve. However, there are limitations; for example, this approach has problems with sharp corners or in cases where edges are too close together. There are ways to improve this method as well, each with its own pros/cons. Personally I think it’s awesome ;)
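The trick described above can be sketched on the CPU in a few lines. This is my own illustration of the distance-field idea, not the actual Genome2D shader; the function names are made up for the example:

```python
# Rough sketch of the distance-field trick: store distance to the vector
# edge in a small grid ("texture"); at render time, sample it with bilinear
# filtering and alpha-test against 0.5. The interpolated distance
# reconstructs a crisp edge at resolutions far above the grid's own.

def bilinear(field, x, y):
    """Bilinearly sample a 2D grid at fractional coordinates."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    x1 = min(x0 + 1, len(field[0]) - 1)
    y1 = min(y0 + 1, len(field) - 1)
    top = field[y0][x0] * (1 - fx) + field[y0][x1] * fx
    bot = field[y1][x0] * (1 - fx) + field[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def alpha_test(field, x, y, threshold=0.5):
    """A pixel is inside the shape if the interpolated distance passes 0.5."""
    return bilinear(field, x, y) >= threshold
```

Because the edge position is recovered from the interpolated values between texels rather than from the texels themselves, a tiny field can drive a large, sharp shape; it also shows why sharp corners suffer, since interpolation rounds them off.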
Ok, enough talk, here is a demo with a 128×128 vector texture in action:
I am still on a roll, so hopefully it will last as long as possible :) I understand that I should focus on tutorials and documentation, and it is coming, but I simply don’t want to waste all these ideas floating in my mind by refocusing on documentation instead of putting them into Genome2D as fast as I can :) Once I feel burned out when it comes to actual coding, there will be plenty of time for documentation.
Any feedback is welcome as usual.
Hi there guys, I am back from Berlin and rocking again. Today it’s once again a highly requested feature. Welcome the FILTERS!
However, the implementation of filters in Genome2D will be different than in native Flash. As there is a very distinct difference between the implementations of the various native filters, I decided to separate them into two categories. One category, which I am presenting here, can basically be called color filters. These are filters that don’t require complex postprocessing, nor do they require multiple passes to render; therefore they are really fast, with almost no impact on performance, since they don’t involve any additional passes, draw calls or render-to-texture operations. So, with the exception of low-end mobiles, where any additional pixel shader instruction adds noticeable overhead, rendering runs at the same speed with them as without them.
The second category, which I am working on and which will come later, will be called postprocessors; these are native filters that can’t be rendered in a single pass or that involve heavy pixel shaders, multitexturing, etc. Typical examples are blur, shadow and glow. More on this topic later.
There are currently three filters, GInvertFilter, GDesaturateFilter and GColorMatrixFilter, all of them pretty self-explanatory. I am aware that you can actually do inversion as well as desaturation with a color matrix, but those two filters are optimized to do just that one simple thing, so they will be faster, especially on low-end mobile. As for the color matrix filter, it does exactly what the native one does; you can achieve pretty much any color manipulation there is.
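The math these filters build on can be sketched quickly. This is an illustration of the underlying color-matrix transform, not Genome2D source; the helper name is made up, but the 4×5 matrix layout matches Flash’s native ColorMatrixFilter:

```python
# A 4x5 color matrix maps each RGBA pixel to a new RGBA, exactly like
# Flash's native ColorMatrixFilter. Inversion and desaturation are just
# particular matrices, which is why dedicated one-job filters can run
# with fewer pixel shader instructions.

def apply_color_matrix(m, rgba):
    """new[i] = m[i][0]*r + m[i][1]*g + m[i][2]*b + m[i][3]*a + m[i][4]"""
    r, g, b, a = rgba
    return tuple(row[0] * r + row[1] * g + row[2] * b + row[3] * a + row[4]
                 for row in m)

# Invert: new = 1 - old for RGB, alpha untouched.
INVERT = [[-1, 0, 0, 0, 1],
          [0, -1, 0, 0, 1],
          [0, 0, -1, 0, 1],
          [0, 0, 0, 1, 0]]

# Desaturate: standard luminance weights replicated to all three channels.
LUM = (0.299, 0.587, 0.114)
DESATURATE = [[LUM[0], LUM[1], LUM[2], 0, 0]] * 3 + [[0, 0, 0, 1, 0]]
```

On the GPU the general matrix costs a full 4×5 multiply per pixel, while a dedicated invert or desaturate shader boils down to a couple of instructions, which is exactly the low-end mobile saving mentioned above.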
Here is a simple example of all the filters in action. I added only the most popular parameters for modifying the color matrix filter here, as adding all possible options would involve tons of UI. There are no limits to human imagination though.
As some of you know my other hobby is photography therefore the photo used is one of my own, thanks go to beautiful model Lenka ;)
Enjoy and as usual any feedback is welcome and very appreciated, nightly build coming this week.