[Kstars-devel] The next steps?

James Bowlin bowlin at mindspring.com
Fri Jun 27 20:52:22 CEST 2008


On Fri June 27 2008, Akarsh Simha wrote:
> I don't think it's too ambitious.  I just don't have a clear idea
> of what to do from now until I get the larger data set.  I plan to
> do the following:
>
> 1. Make the memory limiting slider functional
> 2. Use the broken PM formula to see how much of a size increase it
>    causes in the star catalogs
> 3. Add more fields to the binary file format.

This sounds good.  PLMK if there is something I can do to help.

If the size increase due to PM is reasonable, you could also try actually 
duplicating stars (and perhaps creating a short list of some of them) so
we can see whether the duplication causes problems.
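
For the duplication I'm imagining something along these lines (untested
sketch; Star and trixelOf() are just placeholders, with a crude grid
standing in for the real HTM lookup):

  #include <cmath>
  #include <set>

  struct Star {
      double ra, dec;      // catalog-epoch position, degrees
      double pmRA, pmDec;  // proper motion, mas/yr (pmRA on the sky)
  };

  // Stand-in for the real HTM indexing call: a crude 10x10 deg grid.
  static int trixelOf(double ra, double dec)
  {
      return int(std::floor(ra / 10.0)) * 18
           + int(std::floor((dec + 90.0) / 10.0));
  }

  // Collect every "trixel" the star's PM path crosses over the given
  // number of years; the star gets one duplicate record per entry.
  std::set<int> trixelsCrossed(const Star &s, double years, double step)
  {
      std::set<int> out;
      for (double t = 0.0; t <= years; t += step) {
          double dec = s.dec + s.pmDec * t / 3.6e6;       // mas -> deg
          double ra  = s.ra  + s.pmRA  * t / 3.6e6
                               / std::cos(s.dec * M_PI / 180.0);
          out.insert(trixelOf(ra, dec));
      }
      return out;
  }

The short list you mention would then just be the stars for which
trixelsCrossed() returns more than one trixel.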

You might also want to see what happens when we change the number of 
stars per block (say, 50 or 200).  Fewer stars per block will decrease
memory wastage but will increase overhead.  I think the major overhead
increase is in updating the LRU cache, which seems pretty fast now
compared to the time it takes to draw.
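
Here is the back-of-the-envelope I have in mind for the wastage side of
that trade-off (all of the numbers below are made up):

  #include <cstdio>

  int main()
  {
      const long trixels  = 512;      // mesh size -- made up
      const long recBytes = 32;       // bytes per star record -- made up
      const long nStars   = 1000000;  // stars in the file -- made up

      for (long perBlock : {50L, 100L, 200L}) {
          // The last block of each trixel is half empty on average,
          // so the expected wastage scales with the block size ...
          long wasteKB = trixels * perBlock / 2 * recBytes / 1024;
          // ... while the number of blocks (and hence LRU updates)
          // grows as the blocks shrink.
          long blocks  = nStars / perBlock + trixels;
          std::printf("%4ld stars/block: ~%6ld KB wasted, ~%6ld blocks\n",
                      perBlock, wasteKB, blocks);
      }
      return 0;
  }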

Another possible thing to work on (and maybe you already included this
in item (3) above) is to let the binary file control the mesh size, the
MAXZOOM, and maybe even the stars per block.  All of those changes could
be made with the existing data set.
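
Concretely, I'm picturing header fields along these lines (the names,
widths, and packing are all guesses, and a real header would also need
a version/endianness story):

  #include <cstdint>

  #pragma pack(push, 1)
  struct StarFileHeader {
      char     magic[8];       // file identifier, e.g. "KSTARS\0\0"
      uint16_t version;
      uint16_t htmLevel;       // mesh size (HTM subdivision level)
      uint16_t starsPerBlock;  // block granularity for the LRU cache
      float    maxZoom;        // deepest zoom the file supports
      float    faintLimit;     // faintest magnitude in the file
      uint32_t nTrixels;       // derivable from htmLevel, but handy
  };
  #pragma pack(pop)

  // Fixed on-disk layout: 8 + 2 + 2 + 2 + 4 + 4 + 4 bytes.
  static_assert(sizeof(StarFileHeader) == 26, "header must be packed");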

Another piece that needs to be worked out is a file name convention (or 
whatever) for star catalogs of varying depths.  By default, KStars 
should load the deepest catalog it can find.  Since some of these files
can be quite large, I think we want to give them distinct names to
reduce the possibility of someone accidentally overwriting one.  We
probably want to give the shallow star files similar names, which will 
allow the deeper catalogs to have a larger number of global/shallow
stars.

So the shallow file needs to be identified by its approximate cutoff 
magnitude, and the deep file needs the shallow cutoff plus either 
the total number of stars or the deep cutoff.  The deep cutoff is 
probably the better choice because it will make it easier to sort the 
files by size.  So I would try something like:

  shallowstars-SS.S.dat
  deepstars-DD.D-SS.S.dat

where DD.D is the deep cutoff and SS.S is the shallow cutoff.  If we 
want to get fancy, we could build in something to automatically 
download the deeper catalogs when the user requests them.  I think a
mechanism like this is already in place, but I don't know whether it
will be able to handle such large files.
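
Something like this would make the convention mechanical (sketch only;
the exact names and formats are up for grabs).  KStars would scan for
these files, parse the deep cutoff out of each name, and load the one
with the largest value:

  #include <cstdio>
  #include <string>

  std::string deepStarsName(double deepCut, double shallowCut)
  {
      char buf[64];
      std::snprintf(buf, sizeof buf, "deepstars-%04.1f-%04.1f.dat",
                    deepCut, shallowCut);
      return buf;
  }

  // Returns the deep cutoff, or -1.0 if the name doesn't match.
  double deepCutoffOf(const std::string &name)
  {
      double deep = -1.0, shallow = -1.0;
      if (std::sscanf(name.c_str(), "deepstars-%lf-%lf.dat",
                      &deep, &shallow) == 2)
          return deep;
      return -1.0;
  }

  int main()
  {
      std::string n = deepStarsName(16.0, 8.0);  // deepstars-16.0-08.0.dat
      std::printf("%s -> deep cutoff %.1f\n",
                  n.c_str(), deepCutoffOf(n));
  }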


I'm still concerned about possible delays in getting the larger data 
sets.  For example, if getting and processing 10e6 stars takes 3 days,
then dealing with 100e6 stars could take 30 days.


-- 
Peace, James

