[Kstars-devel] Google SoC

Brian bnc at astronomicalresearchaustralia.org
Wed Mar 26 03:08:08 CET 2008


I agree with James's view below.
I was looking at the PM problem from the point of view of using it now.
I had forgotten about the ability to go forward and back.
I also do not think the forward/back range should be limited, so
managing the data is the solution.

Jan's points,


"4) Star motions are irrelevant for most of people. 99.9% cross
identifications are made in this century. Maybe you can just use wider
field for selecting data?"

This point is valid, as most people would probably just use the planetarium
facility without realising what a great observatory controller KStars is.

"3) Most catalogs (GSC, USNOA) have its own indexing method with C
implementations. So please just add support for standart catalog
distributions. Do not reimplement it."

I have used the online facility to fetch images corresponding to the one
I have just taken. While this is great, I am not sure online is the way
to go for the whole catalogue. USNO-A is pretty big (I have it on
12 CDs), while GSC fits on a disk easily. I currently use UCAC2, which is
2GB, but given the size of disks today...
So, an option might be to ship KStars with not much more than what we
have now, but allow a user who has a catalogue to point KStars at it and
have it supported. This may be easier said than done, of course. The set
of supported catalogues might have to be restricted; I currently use GCX
to process my images, and that is restricted to Tycho and GSC.

In any event the ability to use a standard catalogue works for me:)

There is already a load/import catalogue facility.
Maybe the Google time could be used to make this facility more flexible?

Another idle thought: remove the limiting magnitude from user control
altogether. Have some sort of sliding scale such that as you zoom in,
KStars displays all the stars it has in its catalogue down to a certain
limit. So if the user does not add any additional catalogue they get
approximately what they have now, but if they add a catalogue they can
go deeper?
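To make the idea concrete, a sliding scale might look something like the sketch below. The mapping itself (each halving of the field of view going about 1.5 magnitudes deeper, starting from a naked-eye limit at a wide field) is invented for illustration; only the clamp to the installed catalogue's depth is the point being made above.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical zoom-to-depth mapping: each halving of the field of view
// goes ~1.5 magnitudes deeper, clamped at whatever limit the user's
// installed catalogue actually reaches.
double limitingMag(double fovDegrees, double catalogLimit)
{
    const double wideFieldMag = 6.0;  // naked-eye limit at a 90-degree FOV
    double mag = wideFieldMag + 1.5 * std::log2(90.0 / fovDegrees);
    return std::min(mag, catalogLimit);
}
```

With only the default data installed the clamp keeps the display roughly where it is now; installing a deeper catalogue raises the clamp and the zoom automatically goes deeper.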

Brian



James Bowlin wrote:
> I agree with both of you that the magnitude limit is a more serious
> problem.  But I disagree with the person who said we only need to
> index the stars once per release.  I wish this were the case but it
> is simply not true if we want to look far into the past or into the
> future.  If KStars was only used to see the sky as it is right now then 
> I would agree that no re-indexing would be required.  But if we want to 
> ignore re-indexing then I think we should simply limit the time frames 
> we allow the user to select instead of showing them an incorrect view 
> and merely changing the date label.  I think people who are looking far 
> into the past or the future are often interested in seeing proper 
> motions.  It would be a bug if some high PM stars simply vanished due 
> to having the wrong index.
> 
> I also think we are in agreement that we need a scheme that works
> well for the vast majority of stars and then we should tack on the
> PM re-indexing as a correction.  But I think it is important to
> keep the PM problem in mind when developing the underlying scheme
> because we don't want to paint ourselves into a corner that would
> require us to redo the entire scheme just to add the PM correction.
> 
> I think the next step to take is to get our hands on the actual star 
> data and shove it into a database such as MySQL.  My understanding
> is that if we go down to 12th magnitude we will have between one million
> and two million stars to deal with.  We currently have about 125,000.
> 
> Below are some numbers for our current implementation.  Without having
> the actual new star data I would guess that all of the star counts
> below will need to be multiplied by 10x or 20x to get estimates of how
> these numbers would change with the expanded catalog.
> 
> 
> Our current re-indexing safety margin is 25 arcminutes which is slightly 
> less than half a degree.
> 
> The fastest star we have has a pm of 7058 milliarcseconds/year.
> 
>    125 stars have pm > 840 milliarcseconds / year
>   1252 stars have pm > 304 milliarcseconds / year
> 
> The re-index interval for each group (in centuries) is:
> 
>    25 * 60 * 10 / pm_max
> 
> Where pm_max is the highest pm in the group.
> 
>      125 stars get re-indexed every   212 years.
>     1127 stars get re-indexed every 1,785 years.
>  125,000 stars get re-indexed every 5,000 years.
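James's interval formula can be checked directly with a few lines (a sketch; the 25-arcminute margin and the pm values are the ones quoted above, and the function returns years rather than centuries):

```cpp
#include <cassert>
#include <cmath>

// Re-index interval in years for a group of stars, given the safety
// margin in arcminutes and the fastest proper motion in the group
// (milliarcseconds/year).  margin * 60 arcsec * 1000 mas is the drift
// a star may accumulate before it could cross into a neighbouring bin.
double reindexIntervalYears(double marginArcmin, double pmMaxMasPerYear)
{
    return marginArcmin * 60.0 * 1000.0 / pmMaxMasPerYear;
}
```

Plugging in the fastest star (7058 mas/yr) gives the 212.5-year interval quoted above, and a 300 mas/yr ceiling for the bulk of the catalog gives the 5,000-year figure.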
> 
> 
> Uncommenting line 103 in highpmstarlist.cpp will report (on stdout)
> how many stars actually changed index when the re-indexing occurs.
> When I jumped forward 2,000 years, KStars said:
> 
>   Re-indexed 22 stars at interval  212.5
>   Re-indexed 39 stars at interval 1788.6
> 
> If these results are typical then on average only 30 stars need to
> be re-indexed every 1,000 years.  This will bump up to 300 or 600
> with the expanded catalog.  If we divide the sky into 512 sections
> then each section will average about one re-indexed star every 1,000
> years which means the idea I mentioned previously of having separate
> pre-computed lists of re-indexed stars seems very doable.
> 
> One possible (but probably not optimal) implementation would be to
> add a binary flag for each star saying whether it should be displayed
> or not and then let the small subset of stars that might need to be
> re-indexed exist in multiple sections but they would have the display 
> flag set to true in only one section at a time.  Then when the time 
> changes, we would check a master list and simply flip the bits of only 
> the stars that are in memory that need to be re-indexed.  One benefit 
> of this scheme is it would allow us to reduce the re-indexing safety 
> margin down to zero but we would still need safety margins for other 
> factors.  If we store the stars in roughly fixed sized blocks of 500 or 
> so stars per block then we should be able to keep a list of block ids 
> and pointers so we can quickly find any star by its block id and its 
> offset into the block.  Swapping stars in from disk in blocks should
> also drastically increase the efficiency when the user pans and zooms.
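The block-id/offset lookup described above might be sketched as follows. All names here are invented, and the fixed ~500-star block size is just the figure from the paragraph; this is one possible shape, not a design:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>
#include <unordered_map>

struct Star {
    float ra, dec, mag;
    bool visible;  // display flag: true in exactly one section at a time
};

// Stars live in fixed-size blocks (~500 each).  A star is addressed as
// (blockId, offset), so any star can be found in O(1) without scanning,
// and whole blocks can be swapped in from disk as the user pans/zooms.
class StarBlockIndex {
public:
    void addBlock(std::uint32_t blockId, std::vector<Star> stars) {
        blocks[blockId] = std::move(stars);
    }
    Star* find(std::uint32_t blockId, std::size_t offset) {
        auto it = blocks.find(blockId);
        if (it == blocks.end() || offset >= it->second.size())
            return nullptr;
        return &it->second[offset];
    }
    // Re-indexing reduces to flipping the display bit on the handful of
    // high-pm stars named in a pre-computed master list.
    void flipDisplay(std::uint32_t blockId, std::size_t offset) {
        if (Star* s = find(blockId, offset))
            s->visible = !s->visible;
    }
private:
    std::unordered_map<std::uint32_t, std::vector<Star>> blocks;
};
```

The attraction of the flag scheme is visible here: a time jump touches only the stars on the master list that are currently in memory, never the blocks themselves.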
> 
> 
> PS: I just realized Brian wants to go down to 16th magnitude.  I don't
> know how many stars this means; I'd guess an additional factor of 10x or
> 20x.  It could be that we would want to organize the star data in three 
> (or more?) levels.  The top level might be a single file going down to 
> mag 6 or so.   Then the next level could be 512 files going down to mag 
> 12.  The final level would go down to mag 16 and have 2048 or 8192 
> files.  Perhaps, users would be able to select at install time the 
> maximum magnitude they would ever want to see so we don't increase 
> their disk footprint unnecessarily.  Or perhaps the 16th magnitude data 
> could be available as a separate package.
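The three-level layout in the postscript could be captured by a trivial install-time rule (a sketch; the function name is invented and the magnitude thresholds are the 6/12/16 figures above):

```cpp
#include <cassert>

// Which catalogue levels a user-selected magnitude limit would require:
// level 1 = single top-level file to mag 6,
// level 2 = + 512 mid-level files to mag 12,
// level 3 = + 2048-8192 deep files to mag 16.
int levelsNeeded(double maxMag)
{
    if (maxMag <= 6.0)  return 1;
    if (maxMag <= 12.0) return 2;
    return 3;
}
```

Choosing the maximum level at install time (or shipping level 3 as a separate package) keeps the disk footprint proportional to what the user will ever display.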
> 
> 

