Notes of a more technical nature on the demo Flu.

All parts use screenmodes of 320*256 pixels (at different
numbers of bits per pixel) and use triple screen buffering
to minimise time wasted waiting for the V-sync.
It seems to run faster on my parents' computer than on mine,
perhaps because their 320*256 modes have a higher refresh
rate. Another possibility is that the module which
caches the VRAM gives a significant speed increase.


Flu Flash
(256 colours)
I decided to use a cellular automaton to make this effect
work. It may not be the most efficient way, but it is
fast enough and fairly small, code-wise.
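The exact rule isn't given above, so as an illustration here is a
minimal sketch of the kind of cellular automaton commonly used for
256-colour effects. The fire-style averaging rule and the decay
constant are my own assumptions, not the rule used in Flu Flash:

```python
# A generic demo-style cellular automaton: each cell becomes the
# (weighted) average of the cells below it, minus a small decay.
# The rule here is illustrative, not the one used in the demo.

def step(grid, width, height, decay=1):
    """One CA generation over a flat list of palette indices (0..255)."""
    new = grid[:]
    for y in range(height - 1):          # rows pull energy from the row below
        for x in range(width):
            below = (y + 1) * width
            left  = below + (x - 1) % width
            right = below + (x + 1) % width
            s = (grid[below + x] * 2 + grid[left] + grid[right]) // 4
            new[y * width + x] = max(0, s - decay)
    return new
```

Seeding the bottom row with bright palette indices and stepping each
frame makes the values drift upward and fade, which maps neatly onto a
256-entry fire palette.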


Loss
(256 colours)
An animation, featuring a display of 512 translucent particles.
This method seems to give quite realistic fire. Although
it is more processor intensive than the other method I've
seen, I think it shows promise.
The sprite is compressed using a modified run-length
encoding system.
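The modified scheme itself isn't documented here, but the plain
run-length encoding it builds on can be sketched as follows (byte
runs stored as count/value pairs, count capped at 255):

```python
# Plain run-length encoding, for illustration only -- the demo uses
# a modified format whose details are not described in these notes.

def rle_encode(data):
    out = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += [run, data[i]]            # (count, value) pair
        i += run
    return bytes(out)

def rle_decode(packed):
    out = []
    for i in range(0, len(packed), 2):
        out += [packed[i + 1]] * packed[i]
    return bytes(out)
```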


Reflection
(256 greys)
The ripples are simulated using a 160*128 grid. I rely
on tan(a) ≈ a for small angles to avoid two table
lookups per pixel.
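The trick can be sketched as below; the function name and scale
factor are my own, and the real routine works directly on the
160*128 grid rather than building a list:

```python
# The refracted texture offset at a pixel is tan(angle) * depth, but
# the slope is small, so tan(a) ~ a ~ the local height difference --
# the gradient is used directly and no atan/tan tables are needed.

def ripple_offsets(height, w, h, scale=4):
    """Per-pixel (dx, dy) texture offsets from a flat height-field list."""
    offs = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            dx = height[i] - height[y * w + (x + 1) % w]
            dy = height[i] - height[((y + 1) % h) * w + x]
            offs.append((dx // scale, dy // scale))
    return offs
```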
The moon sprite is compressed by delta scanning,
run-length encoding and, when all else failed,
cutting out the two least significant bits. At over 3k it still
takes up a significant amount of the space available.
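As a rough illustration of delta scanning with the two low bits cut
out (this is my reading of the pipeline; the real encoder's format
will differ):

```python
# Store each pixel as the difference from its predecessor: smooth
# greyscale images collapse into long runs of small values that a
# run-length pass can then squeeze. Dropping the two least
# significant bits first quantises 256 greys down to 64.

def delta_scan(pixels, keep_bits=6):
    q = [p >> (8 - keep_bits) for p in pixels]     # cut the 2 LSBs
    return [q[0]] + [(q[i] - q[i - 1]) & 0xFF for i in range(1, len(q))]

def delta_unscan(deltas, keep_bits=6):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append((out[-1] + d) & 0xFF)           # running sum restores values
    return [v << (8 - keep_bits) for v in out]
```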

The sprite was originally a 109k GIF I found on the internet;
I had to scale it down using ChangeFSI before use.

The wave simulator is based on that described in the help
to the excellent program "2DWaves" by Jan Vlietinck,
available on the Kosovo CD (2nd Ed.) as
ArchimedesWorld_CD1.Old_Disks.Graphics.2DWaves,
and presumably on ArcWorld CD 1.
It is well worth getting this as it fairly toasts on SA. :-)
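For reference, the two-buffer scheme such water simulators are
usually built on looks like this. The exact formula used in Flu is
an assumption on my part, not a transcription of 2DWaves:

```python
# Classic two-buffer water: each new height is the average of the
# four neighbours doubled, minus the height two frames ago, with a
# small damping term bleeding energy away each frame.

def wave_step(curr, prev, w, h, damp=32):
    nxt = prev[:]                      # the old 'prev' buffer is recycled
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            i = y * w + x
            v = (curr[i - 1] + curr[i + 1] +
                 curr[i - w] + curr[i + w]) // 2 - prev[i]
            nxt[i] = v - v // damp     # damping
    return nxt, curr                   # the two buffers swap roles
```

Poking a value into the current buffer makes a ring that spreads
outwards and slowly dies away.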


Fish
(32 thousand colours)
Coded by Alain Brobecker, this is a rotozoom with edges
as quadratic splines and an RGB trail. The 64*64 pixel
Fishy tile had to be heavily compressed, using
interpolation and delta scanning. This gave some image
degradation which was especially noticeable on the eyes,
so I patched these to fix the problem.
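The core of a rotozoom, minus the quadratic-spline edges, can be
sketched as follows (the 64*64 tile size is from above; everything
else here is illustrative, not Alain's code):

```python
# Walk texture space along a rotated, scaled direction vector for
# each screen row; the 64*64 tile wraps with a simple mask.
import math

def rotozoom(tile, w, h, angle, zoom):
    du, dv = math.cos(angle) * zoom, math.sin(angle) * zoom
    out = []
    for y in range(h):
        u, v = -dv * y, du * y         # row start steps perpendicularly
        for x in range(w):
            out.append(tile[(int(v) & 63) * 64 + (int(u) & 63)])
            u += du
            v += dv
    return out
```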


Colourbobs
(16 million colours)
Yet another meta-blobs routine.
A novel aspect is its use of colour, which necessitates
a two-pass algorithm. In essence, for each point I sum the
colours of the overlapping blobs, then scale all the channels
back in proportion using a small LUT.
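A sketch of the second pass, with the LUT replaced by a plain
division for clarity (the first pass, accumulating each blob's RGB
contribution per pixel, is assumed to have run already):

```python
# Rescale any pixel whose largest channel has overflowed, so that
# hue is preserved rather than clipping each channel independently.

def shade(pixels):
    """pixels: list of (r, g, b) accumulated sums, possibly > 255."""
    out = []
    for r, g, b in pixels:
        m = max(r, g, b)
        if m > 255:                    # scale all channels in proportion
            r, g, b = r * 255 // m, g * 255 // m, b * 255 // m
        out.append((r, g, b))
    return out
```

In the demo the division would be a small lookup table indexed by
the overflowing maximum, which is much cheaper than a divide per
channel on an ARM of that era.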



Compression
As you might have noticed, I have heavily compressed the
data I use, with custom routines. This is not enough to
fit it all in 8k, however. Squash makes no impression on the
resultant executable; obviously it is already too dense for it.
However, I do have over 4k of code in there, so I decided
to try and squash that. I devised an algorithm which picks,
from the instructions before the current word, the most
similar one (comparing nibble values) and patches it to give
the right value. It starts off with the decompression loop
itself as the initial data.
The format is:
1 byte: which nibbles are to be replaced
1 byte: which instruction to use as the base (n-1 to n-256)
and, stored separately,
the replacement nibbles, EORed with the corresponding nibbles
of the base instruction.

If the best available match would preserve only 1 or 2
nibbles, I remove the 'base' byte and flag all nibbles
to be replaced.
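One possible decompressor for this format is sketched below.
Several details are my own assumptions, in particular the bit
ordering of the mask, the flag value for the all-replaced case, and
the nibbles being stored literally when there is no base:

```python
# Decode a stream of nibble-patched 32-bit words. Assumed format:
# mask byte (one bit per nibble, set = replaced), then a base byte
# (0..255 meaning 1..256 words back) unless mask == 0xFF, in which
# case there is no base and all eight nibbles are stored literally.
# Replacement nibbles are EORed with the base instruction's nibbles.

def decompress(tokens, nibbles, n_words, history):
    """history: already-decoded words (the decompression loop itself);
    tokens: iterator of mask/base bytes; nibbles: iterator of 4-bit values."""
    out = list(history)
    for _ in range(n_words):
        mask = next(tokens)
        base = 0 if mask == 0xFF else out[-(next(tokens) + 1)]
        word = 0
        for n in range(8):                       # nibble 0 = low 4 bits
            nib = (base >> (4 * n)) & 0xF
            if mask & (1 << n):
                nib ^= next(nibbles)             # patch this nibble
            word |= nib << (4 * n)
        out.append(word)
    return out[len(history):]
```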

This means that the best possible compression is a 50%
reduction: a word found intact among the 256 instructions
before it costs just 2 bytes. At worst, a word costs
5 bytes. In practice, I found a compression ratio of
around 75% for code, and some compression of some of
my data as well.

I have not optimised the routine quite to completion;
there are some optimisations I'd like to make but have
not had the time for (modifying code in this manner is
a hairy technique).

I think a nice 2k utility would be able to analyse a
program for areas which didn't compress well,
generating a near-optimal decompression routine
which would beat Squash compression. However
I doubt I'll have time to make it in the near future so
if anyone else wants to have a go please feel free
(However, I'd love to help).

Tony Haines 29/4/2000
