Stardust project homepage
(example source can be found on the project homepage)
You heard it. Stardust Particle Engine now supports Flare3D, a brand new 3D engine just released a couple of days ago. Awesome!
Flare3D includes a 3DS Max exporter that can export much, much more than just mesh data, such as complex materials (environment map, texture map, etc.), animations, and bone animations. For the first example, I exported the star model and gold material entirely from 3DS Max, and the data was rendered correctly in Flare3D without writing a single line of code. The Flare3D team really has the love for 3DS Max users :)
The Flare3D extension for Stardust mainly consists of three initializers, Flare3DPivot3D, Flare3DPivot3DClone, and Flare3DSprite3D, and two renderers, Flare3DPivot3DRenderer and Flare3DSprite3DRenderer. The Flare3DPivot3D and Flare3DPivot3DClone initializers work pretty much like the original DisplayObjectClass initializer, except that they assign Pivot3D objects from Flare3D to the Particle.target property, and they have to be used along with the Flare3DPivot3DRenderer. The first initializer creates new Pivot3D objects from a reference to a Pivot3D subclass, while the second creates them by cloning an existing one. The Flare3DSprite3D initializer creates 3D sprites and has to be used with the Flare3DSprite3DRenderer.
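The wiring described above might look something like the following sketch. Only the extension class names come from this post; the emitter setup (Emitter3D, the clock, and the exact constructor signatures) is an assumption based on how Stardust is normally used, so treat it as illustrative rather than copy-paste-ready.

```actionscript
// Hedged sketch: constructor signatures and emitter wiring are assumptions.
var emitter:Emitter3D = new Emitter3D(new SteadyClock(1));

// Clone an existing Pivot3D (e.g. the exported star model) for each new particle.
// The initializer assigns the clone to Particle.target.
emitter.addInitializer(new Flare3DPivot3DClone(starModel));

// Pair the initializer with its matching renderer.
var renderer:Flare3DPivot3DRenderer = new Flare3DPivot3DRenderer(scene);
renderer.addEmitter(emitter);
```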
I hope you enjoy playing Stardust with Flare3D :)
Tuesday, May 25, 2010
Friday, May 21, 2010
Stardust project homepage
The initial release of Stardust Particle Engine version 1.1 is now available (yep, no more betas). The major difference of the current revision from the previous one is the use of fast-splicing arrays (I made up this name, because I don't know if there's a formal or correct name for it).
Before Stardust employed linked lists as its default internal particle collections, ordinary arrays were used. Arrays are known to be fast to traverse and sort; however, they are also notoriously slow when it comes to splicing large arrays. This is why I switched to linked lists, which can perform much faster splicing through simple node-link manipulation.
Now I've come back to arrays, using a new method to perform splicing, and this turns the tables again. Stardust now uses fast-splicing arrays as its internal particle collections.
So, what are fast-splicing arrays anyway? Well, they're no different from ordinary arrays except that they perform splicing quite differently. As any programmer familiar with ordinary array splicing knows, an ordinary splice first creates a new array one cell smaller than the original, and then copies the original elements, except the spliced-out one, into the new array. This is very CPU-consuming when dealing with large arrays, since a new large array is created every time a large array is spliced, not to mention the fact that rapid splicing is a fundamental nature of particle engines.
The figure below illustrates the splicing process for ordinary arrays.
Fast-splicing arrays do the job differently. Each one always keeps its last cell empty, so that iterators can tell when they have reached the "tail" of the array, and it doubles its size when that last empty cell is about to be used. Each array also keeps track of the index of the last occupied cell. When a particle is to be removed from a cell, the last particle is moved into the now-empty cell, and then the index indicator is decremented, i.e. moved left. No new arrays are ever created for splicing.
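The mechanism above can be sketched in a few lines. This is not the actual Stardust source, just a minimal illustration of the idea:

```actionscript
// Minimal sketch of a fast-splicing array (not the actual Stardust code).
public class FastArray {
    private var elements:Array = [null]; // the last cell is always kept empty
    private var lastIndex:int = -1;      // index of the last occupied cell

    public function add(particle:Object):void {
        lastIndex++;
        elements[lastIndex] = particle;
        // Grow before the empty tail cell would be used.
        if (lastIndex == elements.length - 1) elements.length *= 2;
    }

    public function removeAt(index:int):void {
        elements[index] = elements[lastIndex]; // move the last particle into the hole
        elements[lastIndex] = null;            // the tail cell is empty again
        lastIndex--;                           // two assignments, one decrement, no new array
    }
}
```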
The order of the array has been disrupted, you say? As long as no mutual action is in place, the order of the array is of no significance. Even when mutual actions are involved and the particles must be sorted by their x-coordinates, the emitter sorts the array automatically before further processing, so the order is still not a problem.
This splicing approach is even faster than linked-list splicing, since it only involves two assignment operations and one integer decrement operation, as opposed to four assignment operations for linked-lists (or maybe more...argh, you do the math).
Moreover, arrays are faster for adding particles, because they simply assign references to their cells; linked lists, on the other hand, have to create a new node for each added particle and re-link it to its neighboring nodes.
I think I don't have to say anything more about the sorting operations. Arrays are the fastest, 'nuff said.
Here's a table that compares the performance of particle adding, traversal, splicing, and sorting operations for different particle containers. I believe that you can clearly understand why I prefer fast-splicing arrays over linked-lists.
Wednesday, May 19, 2010
Monica WINS by ~cjcat2266 on deviantART
Play Monica: Nightmare
It's been quite a while since I took third place in the Creative Game Design Contest held by Gamer. Today they finally sent me the certificate for the award. I felt like drawing Monica being excited about the certificate, so here it is :p
Tuesday, May 11, 2010
ZedBox project homepage
I was happy to learn from Milkmidi of Medialand that Mark Vann, also from Medialand, has built a Heineken website using ZedBox, my 2.5D billboard engine.
I'd never imagined ZedBox could be used to create such a stunning visual experience with massive cans of Heineken beer, although at some points the website does consume a lot of CPU resources. I think the performance could be improved by caching, or pre-rendering, the bitmaps at their different rotation angles, instead of rotating them directly at run-time. Anyway, Mark's definitely done a pretty darn good job on this one.
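The pre-rendering idea mentioned above could be sketched roughly like this: render one BitmapData per rotation step up front, then swap bitmaps at run-time instead of rotating display objects every frame. The function name and parameters here are invented for illustration.

```actionscript
import flash.display.BitmapData;
import flash.display.DisplayObject;
import flash.geom.Matrix;

// Hypothetical helper: cache one pre-rotated BitmapData per rotation step.
function buildRotationCache(source:DisplayObject, steps:int, size:int):Vector.<BitmapData> {
    var cache:Vector.<BitmapData> = new Vector.<BitmapData>(steps, true);
    for (var i:int = 0; i < steps; i++) {
        var bmp:BitmapData = new BitmapData(size, size, true, 0);
        var m:Matrix = new Matrix();
        m.translate(-source.width / 2, -source.height / 2); // center the source
        m.rotate(i * 2 * Math.PI / steps);                  // rotate to this step's angle
        m.translate(size / 2, size / 2);                    // move into the bitmap
        bmp.draw(source, m, null, null, null, true);        // smoothing on
        cache[i] = bmp;
    }
    return cache;
}
```

At run-time, each can would simply pick `cache[int(angle / stepSize) % steps]` instead of paying for a vector rasterization every frame, trading memory for CPU.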
Nice job, Mark. And thanks a lot for supporting ZedBox :)
I've seen a lot of most-wanted-Flash-features articles. So I decided that I might as well write one :p
Full GPU Rendering Support
Currently, Flash Player 10.1 only supports hardware acceleration for the full-screen feature and video playback. I've always wanted to see Flash Player become capable of rendering all display objects with full GPU acceleration. It would be a huge performance boost, and I believe lots of people out there also want this feature very badly.
Function Overloading
ActionScript is the only object-oriented language I use that does not support function overloading. This can sometimes lead to frustrating experiences when designing frameworks. Sometimes I just want to provide functions with different sets of parameters that do the same job; however, due to Flash's lack of support for function overloading, I have no choice but to use lots of optional wild-card-typed parameters and branch on their run-time types within a single function.
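The workaround described above typically ends up looking like this hypothetical example (the function and property names are invented; only the pattern of wild-card-typed parameters with run-time type checks is the point):

```actionscript
// One function standing in for several overloads, dispatching on run-time type.
public function setTarget(target:*):void {
    if (target is String) {
        // caller passed a name: look the display object up
        this.target = container.getChildByName(target as String);
    } else if (target is DisplayObject) {
        // caller passed the object directly
        this.target = target as DisplayObject;
    } else {
        throw new ArgumentError("Unsupported target type.");
    }
}
```

With real overloading, these branches would be two separate, statically checked function signatures.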
Complete Generics Support
The Vector class supports generics. However, I'm talking about COMPLETE generics support. That is, developers should be able to write classes with whatever class templates they want, instead of only playing with the little Vector class provided by Adobe.
It is so tempting to write classes with generics. Just imagine if you could write the following generic class:
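Something like this, say, with syntax modeled on Vector.<T>. To be clear, this is purely imagined: AS3 does not support user-defined generics, and the class name is invented.

```actionscript
// Purely hypothetical syntax; AS3 does not support user-defined generics.
public class LinkedList.<T> {
    public function add(element:T):void { /* ... */ }
    public function removeFirst():T { /* ... */ return null; }
}
```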
There'd be no more type-casting issues, and the following code would be perfectly legal.
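Assuming a hypothetical generic LinkedList.<T> class existed, usage would be type-safe with no casting at all:

```actionscript
// Hypothetical usage: the type parameter makes every operation statically typed.
var list:LinkedList.<Particle> = new LinkedList.<Particle>();
list.add(new Particle());
var p:Particle = list.removeFirst(); // no cast needed
```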
Data Container Framework
What naturally comes after generics support should be a data container framework like Java's. I hope Adobe can provide a set of containers for basic data structures, such as linked lists, stacks, hash maps, hash sets, etc. Currently the Dictionary class acts like an object-to-object hash map, and it is not strongly typed, which requires lots of casting. With a generic data container framework, we would be able to write code like this:
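For instance, something along these lines; the HashMap.<K, V> class is invented for illustration, since no such generic container exists in AS3:

```actionscript
// Hypothetical generic container: strongly typed keys and values.
var scores:HashMap.<String, int> = new HashMap.<String, int>();
scores.put("monica", 100);
var s:int = scores.get("monica"); // statically typed, no cast
```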
And it's guaranteed to be type-safe, so you don't have to cast the value returned by a lookup.
I like the Java container framework very much, especially its support for iterators. Iterators make traversing abstract containers extremely easy and intuitive. I hope Adobe seriously considers adding a container framework. I know there are already some neat container frameworks out there, like as3ds, but it's always nice to have official support, isn't it?
That's all I've got for now. I'm sure I'll come up with more wanted Flash features.
Friday, May 7, 2010
This is a sequel to my previous tutorial, Thinking in Commands part 1 and part 2.
This time I talked about how to encapsulate external data-loading functionality into commands, so that it can be arranged together with other commands. I've also introduced the DataManager class, which essentially acts as a global variable aggregator. However, as Rich pointed out in the comments, this class is not type-safe, thus requiring type casting throughout the entire application. I also think this could be a problem for large applications. When I was writing the class, I was thinking of the Proxy class from PureMVC, which likewise acts as a global variable aggregator that is not type-safe, since that is how you obtain a reference to a custom proxy object.
Not type-safe as it is, it has actually served me quite well in most applications. Still, I agree with Rich that it's not friendly in large applications. Unfortunately, this is the best solution I can come up with right now. I'll see if I can conceive something better for handling data.
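For readers who haven't seen the tutorial, a global variable aggregator in this spirit boils down to something like the sketch below. This is my minimal illustration of the pattern and its type-safety problem, not the actual DataManager source, and the method names are assumptions.

```actionscript
import flash.utils.Dictionary;

// Minimal sketch of a global variable aggregator (names assumed).
public class DataManager {
    private static const data:Dictionary = new Dictionary();

    public static function setData(key:String, value:*):void {
        data[key] = value;
    }

    public static function getData(key:String):* {
        // Untyped return value: every caller must cast,
        // which is exactly the type-safety concern raised above.
        return data[key];
    }
}
```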
Saturday, May 1, 2010
Yup, I'm currently working on a scripting engine, named Shintaz, which is the name of another character in Rusher's series. You can check out the complete character lineup here.
The motivation is twofold: first, it is my term project for the compiler course I'm taking this semester; second, my ultimate goal is to integrate this scripting engine with the Rusher Game Framework, so that you can open a console, like the cheat-code consoles in many Valve games, and enter scripts. This is mostly for testing and debugging purposes.
The project SVN is already opened on Google Code. Here's the source folder and the documentation. I use the ASUnit framework for unit testing Shintaz. You can check out the testing project here.
The scripting engine is still a work in progress. So far, I've finished the scanner. There's still a long way to go, with the parser and virtual machine to be finished.
Here I'll show some code snippets for the scanner.
This is how a scanner is initialized. The parameter passed to the constructor is a string of script, or code.
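A sketch of that initialization; the concrete class name Scanner is an assumption, since only the IScanner interface is named in this post:

```actionscript
// Hedged sketch: construct a scanner from a string of script.
var scanner:IScanner = new Scanner("var x = 1 + 2;");
```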
Tokens of the code can be obtained by repeatedly calling the IScanner.getToken() method until it returns null. Each token consists of a token type, represented as an integer value, and a token value, whose data type depends on the token type. Here's what a piece of actual scanner code would look like.
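The loop described above is straightforward; assuming a scanner has already been constructed, it looks roughly like this:

```actionscript
// Pull tokens until the scanner is exhausted (getToken() returns null).
var token:Token;
while ((token = scanner.getToken()) != null) {
    trace(token); // Token.toString() shows both the token type and value
}
```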
The Token.toString() method is overridden to show both the token type and the token value. This is what will be displayed on the output panel.
That's it. This is my current progress on Shintaz. I'll begin to work on the parser as soon as I can. Hopefully, the entire engine can be completed by the end of this month.