Posts by sHTiF
[Update] Added camera information
Hi guys, I know it's been some time since my last blog post, and I was also scolded for my lack of posting, so here it goes.
Today I am going to talk about something which should be obvious to all Genome2D users, but it seems that a lot of them don't know about this feature, and what's more, it's a feature that seems to attract non-Genome2D users as well. I am talking about the ability to draw directly into the GPU without the need for any display list architecture in place. It's very similar to blitting, and it may attract people that want to port their blitting engine to utilize the GPU. And since there is no render graph overhead, this approach is VERY fast, as you are literally just pushing stuff onto the GPU.
The initialization of Genome2D is the same whether you want to use the render graph or just direct draw; you can even combine the two, which is very useful for drawing a large number of objects in a highly optimized way.
var config:GContextConfig = new GContextConfig(stage, new Rectangle(0,0,stage.stageWidth,stage.stageHeight));
// Initialize Genome2D and hook the handler that is called once the GPU context is ready (handler name is illustrative)
genome = Genome2D.getInstance();
genome.onInitialized.addOnce(genomeInitializedHandler);
genome.init(config);
In the initialization handler, which is called upon Genome2D/GPU initialization, you should create your textures or even texture atlases, as direct draw can work with both. So again, no change here: there are no special textures for direct draw calls versus the GSprite/GMovieClip ones. We will also need to hook up a handler to the rendering pipeline where we can do our draw calls.
// We will create a single texture from an embedded bitmap
texture = GTextureFactory.createFromEmbedded("texture", TexturePNG);
// Add a callback into the rendering pipeline (handler name is illustrative)
genome.onPreRender.add(renderHandler);
There are two global callbacks in the rendering pipeline, onPreRender and onPostRender, where the first is called before the node render graph is rendered and the second one after. If we are going to use only direct draw calls, it doesn't matter which one we use.
Now, in the render callback handler, we can do our direct draw call:
var context:GStage3DContext = genome.getContext();
context.draw(texture, 100, 100);
This will draw our texture at position 100,100. Simple as that: there is no render graph involved, just direct drawing into the GPU. There are also more options.
context.draw(texture, 100, 100, 2, 2, 0.5);
This will draw the same texture at 100,100, but scaled by 2,2 and rotated by 0.5 radians.
You can also modify the color.
context.draw(texture, 100, 100, 1, 1, 0, 0, 1, 1, .5);
This will render the texture at 100,100 without scale or rotation, but its red channel will be multiplied by 0 and it will be drawn at .5 alpha.
Additionally you can involve blend modes.
context.draw(texture, 100, 100, 1, 1, 0, 0, 1, 1, .5, GBlendMode.MULTIPLY);
This will render the same as the previous call but with the multiply blend mode. And finally, for advanced users, you can even do direct draw calls with filters, which offer a way to override the default GPU shader.
context.draw(texture, 100, 100, 1, 1, 0, 0, 1, 1, .5, GBlendMode.NORMAL, filter);
This way you can do pretty much anything the GPU can do at the shader level.
Additionally there are further custom draw calls. As you can see, the current examples draw quads, and what's more, there is only a limited way to transform such a quad (no skew, for example). This is mainly for performance reasons, as most draw calls don't need these additional transformations, which involve additional data/calculation overhead. To solve such scenarios there is a low-level call using raw matrix data:
context.drawMatrix(texture, a, b, c, d, tx, ty, red, green, blue, alpha, blendMode, filter);
As you can see, it offers all the additional modifications of the previous call but uses a matrix for the transformation, offering unlimited manipulation.
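For example, the skew that the plain draw call can't express can be fed in through the matrix components. A minimal sketch using flash.geom.Matrix (the skew setup itself is mine, not part of Genome2D):
// Horizontal skew of 0.3 radians encoded in the a,b,c,d components
var m:Matrix = new Matrix();
m.c = Math.tan(0.3);
m.tx = 100;
m.ty = 100;
context.drawMatrix(texture, m.a, m.b, m.c, m.d, m.tx, m.ty, 1, 1, 1, 1, GBlendMode.NORMAL, null);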
Next up is the polygon draw, which enables you to draw any shape; all you need is its triangulated vertex data and the corresponding UV coordinates.
context.drawPoly(texture, vertices, uvs, x, y, scaleX, scaleY, rotation, red, green, blue, alpha, blendMode, filter);
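For instance, a 100×100 quad built from two triangles could be drawn like this (a sketch; I am assuming flat numeric arrays of x,y pairs for the vertices and UVs):
// Two triangles forming a 100x100 quad
var vertices:Array = [0,0, 100,0, 100,100, 0,0, 100,100, 0,100];
// UVs mapping the full texture onto the quad
var uvs:Array = [0,0, 1,0, 1,1, 0,0, 1,1, 0,1];
context.drawPoly(texture, vertices, uvs, 200, 200, 1, 1, 0, 1, 1, 1, 1, GBlendMode.NORMAL, null);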
Keep in mind that you need to consider all the GPU-sensitive state changes that result in broken batching, such as alpha/non-alpha usage, a blend mode change, a filter change, or a draw call with a different data layout.
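A contrived sketch of what this means in practice (positions and blend modes are mine): the first three calls below split into three batches because the blend mode flips between them, while the last two share state and batch together.
// Breaks batching: blend mode alternates on every call
context.draw(texture, 0, 0, 1, 1, 0, 1, 1, 1, 1, GBlendMode.NORMAL);
context.draw(texture, 32, 0, 1, 1, 0, 1, 1, 1, 1, GBlendMode.MULTIPLY);
context.draw(texture, 64, 0, 1, 1, 0, 1, 1, 1, 1, GBlendMode.NORMAL);
// Batches fine: identical state across consecutive calls
context.draw(texture, 0, 32, 1, 1, 0, 1, 1, 1, 1, GBlendMode.NORMAL);
context.draw(texture, 32, 32, 1, 1, 0, 1, 1, 1, 1, GBlendMode.NORMAL);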
Finally, for all those that don't have enough, there are draw-from-source low-level draw calls where you can specify your source rectangle, overriding the actual texture UV coordinates. This is handy especially for people that plan to port their blitting engines, which often involve a source->target blit.
context.drawSource(texture, sourceX, sourceY, sourceWidth, sourceHeight, x, y, scaleX, scaleY, rotation, red, green, blue, alpha, blendMode, filter);
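So a copyPixels-style blit of a 32×32 region from inside the texture could look like this (the rectangle values are just an example):
// Blit the 32x32 region at (64,0) inside the texture to screen position 150,150
context.drawSource(texture, 64, 0, 32, 32, 150, 150, 1, 1, 0, 1, 1, 1, 1, GBlendMode.NORMAL, null);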
Additionally there is drawMatrixSource, which will come in the next build.
All these draw calls also support a custom camera through which they are rendered. Setting a camera in Genome2D is very simple:
var contextCamera:GContextCamera = new GContextCamera();
contextCamera.y = contextCamera.x = 100;
// Apply the camera to all consecutive draw calls
context.setCamera(contextCamera);
This will set the context camera at position 100,100 for all consecutive draw calls. Keep in mind that the default camera looks at the center of the stage, so with an 800×600 stage the default camera used by Genome2D sits at x = 400, y = 300.
All draw calls also support custom rectangle masking. Similarly, in state-machine fashion, you are able to set an axis-aligned masking rectangle for all consecutive draw calls, like this:
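A minimal sketch, assuming the mask is set through a setMaskRect method on the context (double-check the exact name against the build):
// Mask all consecutive draws to the given rectangle (method name is an assumption)
context.setMaskRect(new Rectangle(0, 0, 400, 300));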
This will discard anything rendered outside the 0,0,400,300 rectangle. However, the aforementioned setCamera method will reset this mask rectangle to the camera viewport, so you need to set your custom rectangle explicitly after the setCamera call.
So guys, if this is something you didn't know about or were looking for in other Stage3D frameworks, just check it out. The latest Haxe build can be found here: http://build.genome2d.com/haxe If there are any questions, feel free to ask.
[UPDATE 22.8.2013] Ok guys, the reason for the different and somewhat low benchmark numbers on iOS for both Genome2D and Starling was that I was using an incorrect build. Now it's remedied, and all the benchmarks were rerun across all devices for both Genome2D and Starling.
Ok guys, I've finally had time to benchmark the newest Starling v1.4 against the latest Genome2D hx version. So let's jump right to it. Since I am going off on vacation tomorrow, this will be divided into two parts; the sources come in the second part, after vacation, since I need to polish them up.
All of the results here are averaged from 5 or more measurements.
Standard benchmark targeting 60 FPS with a step of 50, and assets scaled at 100%. Using AS3 and the latest swc of both frameworks.
I recreated the standard Starling benchmark to be renderable by Genome2D as well, to compare performance. I also changed the background to use BlendMode NONE in both frameworks, to avoid unnecessary fillrate issues on low-end devices. I used the fastest way to render rotating objects: in Starling it should be using Images; in Genome2D hx it is done by using a custom class for the transforms and then rendering with a draw call, as sketched below. For those familiar with Genome2D: no, you should not use GNodes for the fastest rendering; for example, particle systems are a single node with smaller, faster transform instances rendering each particle.
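To give an idea, here is a minimal sketch of such a custom transform class and render loop (class and field names are mine, not the benchmark sources):
// A lightweight transform holder, far cheaper than a full GNode
class RotatingQuad {
    public var x:Number = 0;
    public var y:Number = 0;
    public var rotation:Number = 0;
}
// In the render callback: update each quad and push it straight to the GPU (quads is an Array of RotatingQuad)
for each (var quad:RotatingQuad in quads) {
    quad.rotation += .05;
    context.draw(texture, quad.x, quad.y, 1, 1, quad.rotation);
}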
The web benchmark will automatically show you if you are using PPAPI and the debugger player; all the measurements here were done using the release player.
Run browser benchmark: HERE
First up is Chrome, where we actually have to benchmark two different Flash Players, since there is the Adobe one, obviously, and then there is the Google PPAPI one.
As you can see, there are major differences in performance between Starling and Genome2D; most of you already ran this benchmark when I posted it a week ago. So Genome2D is around 8 times faster than Starling in the Adobe Flash Player in Chrome. It's a really major difference.
As for Google's PPAPI, the difference is not that huge, but with all the PPAPI bugs I wouldn't recommend anyone use PPAPI except on Linux, and it's easy to force players to install Adobe's player with JS. The interesting part is that Starling is actually faster in PPAPI than in NPAPI. Also, the main drop in performance for Genome2D comes from the PPAPI-specific pipeline for OpenGL calls, which results in worse performance specifically when using vertex shader batching. You can get better results in Genome2D using vertex buffer batching (which can be easily enabled) when targeting PPAPI. I didn't include this in these benchmarks since I wanted to use a single-pipeline benchmark for Starling/Genome2D throughout the whole process.
Next up is another browser benchmark, and that is Firefox. As we can see, Firefox is even faster than Chrome.
We will use a slightly modified benchmark for mobile testing, as proposed by Daniel for Starling as well here: Starling 1.4 benchmarks. So we are targeting 30 FPS instead of the 60 we did on desktop. Also, all the objects are scaled down to .25 to minimize the fillrate impact on framework benchmarking. Daniel also suggested enabling mipmaps, but in my tests it was clear that this does not have any impact on performance, so we are not going to generate mipmaps. Another thing is that the step size is 10 for mobiles instead of 50, to increase the number of objects at a slower pace.
First up, the Android devices.
Samsung Galaxy Tab (first version)
Quite an old tablet, one of the first Android tablets if my memory serves me correctly. And the only mobile device where the difference between the frameworks' speeds is less than 200%; maybe we are hitting a fillrate wall even with scaled-down assets.
Samsung Galaxy S2: although a bit dated nowadays, still a phone with incredible performance; actually, I would say it outperforms most of today's mobiles with ease. Genome2D is more than three times faster here; it can actually render more objects at 30 FPS on mobile than Starling can at 60 FPS on my i7 (GTX660) desktop. ;)
Nexus 7 tablet with the latest Android 4.3. This is actually one of the mobile devices Daniel benchmarked in his latest benchmarks; the funny thing is that I am getting higher numbers for Starling than he did, for some reason. Genome2D is still over three times faster here.
Next up are the iOS devices, using an ad-hoc build.
Genome2D is 4 times faster than Starling in this instance.
No surprise here, since the iPad 2 is the same hardware as the iPad Mini.
Twice as fast as the iPad Mini and iPad 2; both frameworks scale almost the same.
I was expecting a smaller difference here, since it's a really old device, and was expecting a fillrate bottleneck even with scaled assets. Genome2D is still around three times faster.
So that's all folks for now, due to time restrictions; after vacation I will post the sources and maybe some additional information.
Cheers and any feedback is welcome.
Hi there guys, some of you that are not inside our small awesome community may be curious whether there is something going on with Genome2D. Everything is going smoothly and I am working on it almost daily; there is just no time to blog. So today I decided to share two of my work-in-progress Genome2D experiments.
First is support for the Spriter format. Some of you already know Spriter (http://www.brashmonkey.com/spriter.htm); it's an upcoming awesome tool for 2D animation, and those that are not familiar with it should definitely check it out.
It already supports interpolation/tweening in the movement, and bones support is coming next; I'm just waiting for the upcoming beta build of Spriter.
Another experiment I've been working on is stencil shadows. This involves additional shaders, low-level draws, materials and components, so it's quite a major addition, and I bet all of you will enjoy it. Here is a demo, which is a clone of my old Flash Player 9 Genome2D demo.
Also, the new Genome2D forum is up, and we should all move there. I will not move the current forum DB there, as it would be tedious, and most of the information isn't that valuable anymore anyway. I am looking for our most experienced Genome2D users to start the new forum up :P
That's all folks, as usual due to time constraints. I am going to Venice, and next week I am in Prague for the Geewa hackathon; once back, I will dive again into Genome2D. Cheers.
Hi there guys, don't worry, I still work relentlessly on Genome2D, but as some of you know, this time before Christmas is always the most hectic one. We've been working on an arkanoid game called Robie at our company Viral, which was finally successfully released today on Facebook :) The road is far from over, but I am glad I can share our first official version. It is using the latest Genome2D and Luca's awesome Nape 2.0 with CCD, which I've been beta-testing for some time.
So enjoy guys, and let me know what you think. Once Christmas is finally here, I will have time to wrap up the Genome2D 0.9.3 official release.