[Kstars-devel] UI For Memory Management

James Bowlin bowlin at mindspring.com
Sat Jul 19 20:07:41 CEST 2008


On Sat July 19 2008, Akarsh Simha wrote:
> Should the slider be replaced by the Spin Box for the faint mag when
> zoomed out that we had earlier? I don't like the Spin Box because the
> numbers wouldn't make sense if we were labelling the field as 'Star
> Density'.
>
> Sliders, AFAIK, can handle only integers and integer steps. This
> means that I will need to map the numbers 1 to 10 (or whatever) from
> the slider to a magnitude range, by playing around with James's map
> from star density to magnitude limit, so that they actually
> correspond to linear changes in star density. I hope this will be
> acceptable to everyone.

I agree that a slider is preferable to a spinner.  If you can create
a slider that goes from 0 to 100 (or whatever) then it would be trivial
to convert it to zoomed out magnitudes that go from 0.0 to 10.0 by 
dividing by 10.0.
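
For what it's worth, here is a minimal sketch of that conversion (the
function names are just for illustration, not existing KStars code):

  // Sketch: map an integer slider (0..100) to the floating-point
  // density/magnitude range 0.0..10.0, and back again.
  double sliderToDensity( int sliderValue )
  {
      return sliderValue / 10.0;                      // e.g. 90 -> 9.0
  }

  int densityToSlider( double density )
  {
      return static_cast<int>( density * 10.0 + 0.5 );  // round to nearest step
  }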

> Any suggestions on the math behind the memory usage estimates? If it
> is going to be 10 values from a slider, it might be possible to
> hardcode values, and these hardcoded values will be extremely
> accurate. I think I should avoid hardcoded values (other than the
> "basic" memory usage), lest we change things later. So I plan to work
> out the memory usage by finding the average number of stars in the
> FOV for the given magnitude limit, and then multiplying this by the
> ratio of maximum stars per trixel to the average stars per trixel, so
> we make an approximate guess of the maximum memory usage (so that the
> user is safe). Of course, this assumes that he has the Tycho-2 deep
> star catalog. Maybe I should implement a way of checking this.

The memory usage is a nasty problem partly because the sizes of the
HTM trixels vary so much.  I think we will eventually want to use a 
different spatial index that doesn't suffer from this problem.
IMO the tricky part is figuring out the number of visible trixels.
I've already posted a formula for the worst-case number of trixels for
a given aperture radius, but unfortunately it can be 3 times larger than 
the best case.

I think the easiest thing to do is to estimate the number of visible
trixels from the average area of a trixel, plus a rough guess at how
much "extra" area is covered around the edge of the aperture.

Given an aperture of radius R, I made a worst-case estimate by 
constructing an aperture with a larger radius (R + L), where L is the 
length of one side of a trixel.  For the worst case, I assumed the 
annulus between R and R + L was completely covered; for the average 
case, I suggest you assume that only a fraction of the area of the 
annulus is covered by visible trixels.  Half would be a good first stab.
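
Here is a rough sketch of that kind of estimate in C++ (the HTM level,
annulus fraction, stars-per-trixel and bytes-per-star figures are all
placeholder assumptions, not KStars values):

  #include <cmath>

  // Rough estimate of the number of trixels visible in an aperture of
  // angular radius R (radians): take the solid angle of the aperture,
  // add a fraction of the annulus between R and R + L (L = trixel side
  // length), and divide by the average trixel area.
  double estimateVisibleTrixels( double R, int htmLevel,
                                 double annulusFraction = 0.5 )
  {
      const double sphereArea = 4.0 * M_PI;                  // steradians
      const long   numTrixels = 8L << ( 2 * htmLevel );      // 8 * 4^level
      const double trixelArea = sphereArea / numTrixels;     // average area
      const double trixelSide = std::sqrt( trixelArea );     // crude estimate of L

      // Solid angle of a spherical cap of radius r: 2*pi*(1 - cos r)
      double inner   = 2.0 * M_PI * ( 1.0 - std::cos( R ) );
      double outer   = 2.0 * M_PI * ( 1.0 - std::cos( R + trixelSide ) );
      double annulus = outer - inner;

      return ( inner + annulusFraction * annulus ) / trixelArea;
  }

  // Rough upper bound on memory usage for the visible region.
  double estimateMemoryBytes( double visibleTrixels,
                              double maxStarsPerTrixel,
                              double bytesPerStar )
  {
      return visibleTrixels * maxStarsPerTrixel * bytesPerStar;
  }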

At some point you may want to simply collect empirical statistics,
such as recording the worst-case number of starblocks as a function of 
"star density" and FOV, and then use those data to "hard code" a memory
usage formula.
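
If we go the empirical route, something like the following hard-coded
lookup would do (every number in the table is a placeholder; the real
values would come from the recorded statistics):

  // Sketch: memory estimate keyed on the density setting, with linear
  // interpolation between measured points.  Index = density 0..10.
  static const double memoryMB[ 11 ] = {
      5, 8, 12, 20, 35, 60, 100, 170, 280, 450, 700    // placeholder values
  };

  double estimateMemoryMB( double density )
  {
      if ( density <= 0.0 )  return memoryMB[ 0 ];
      if ( density >= 10.0 ) return memoryMB[ 10 ];
      int    i = static_cast<int>( density );
      double f = density - i;                           // fractional part
      return memoryMB[ i ] * ( 1.0 - f ) + memoryMB[ i + 1 ] * f;
  }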

Again, this is a nasty problem; you probably want a simple solution that
will at least give the user a rough idea of memory usage even if it is
not very accurate (i.e. off by a factor of two or more in some cases).  

It is possible that the star density "safety" control I was suggesting
earlier will end up being an easier solution than predicting the memory
usage.  I put a (very) arbitrary limit of 9.0 on the star density 
setting available via the mouse-wheel.  If we don't put some kind of
a limit on the density obtainable via the mouse-wheel then it would
be all too easy for a user to just spin the mouse wheel and eat
up gobs of memory (assuming a very large catalog).  This problem
is exacerbated by the fact that it can take a very long time (seconds)
to draw the sky when it has a lot of stars.

One possible solution would be to *rely* on the mouse-wheel (and the '-'
and '=' keys) to adjust the density and make the slider/spinner in the
config menu control the "maximum star density".
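
A minimal sketch of that split, with made-up function and option names:

  // Sketch: the mouse wheel (and the '-'/'=' keys) nudge the current
  // star density, but the value is always clamped to the maximum set
  // by the config-dialog slider.
  void adjustStarDensity( double &currentDensity, double step,
                          double maxDensityFromSlider )
  {
      currentDensity += step;                        // e.g. +0.1 per wheel notch
      if ( currentDensity > maxDensityFromSlider )
          currentDensity = maxDensityFromSlider;     // config-menu cap
      if ( currentDensity < 0.0 )
          currentDensity = 0.0;
  }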

I wish I could have given you an elegant solution that wrapped it all
up neatly but as I said above, this is a nasty little problem.

Remember, your estimates of memory usage are based on the totally 
arbitrary limit of 10.0 on the star density control.  Perhaps the 
simplest thing for us to do now is to simply impose such a hard limit 
(perhaps 9.0) and then just leave the rest of this problem for another
day.

I propose we just hard-code a limit of 9.0 on the star density and then
add a warning about memory usage.  We can revisit this problem at a 
later date if we want to allow power users to go to much higher star 
densities.  


-- 
Peace, James


