[Kstars-devel] Precision in KStars

Henry de Valence hdevalence at hdevalence.ca
Tue Jun 25 14:09:01 UTC 2013


On Tue, Jun 25, 2013 at 4:46 AM, Aleksey Khudyakov
<alexey.skladnoy at gmail.com> wrote:
> On 25 June 2013 08:56, Daniel Baboiu <daniel.baboiu at shaw.ca> wrote:
>> If we are talking only about computations related to rendering, and
>> there are only a handful of steps, single precision would be
>> sufficient. A precision of 1e-8, over a full circle, leads to an
>> error of about 0.01-0.02 arcsec.
>>
> It's 1e-7 actually, so quantization will be at the level of 0.1 arcsec.
> That's not prohibitive, but dangerously close: should we lose any
> precision, it could lead to problems. Calculation of projections will
> probably be OK; everywhere else we should use double just to be safe.
> A precision-related bug would be difficult enough to detect.
>
> Is it possible to select between float/double at compile time using a
> typedef and maybe a bunch of #defines? That way it's possible to use
> float for speed and fall back to double if it doesn't work out.

I suppose that this is possible, but I don't think that it's a great
idea. It would be better from a code maintenance point of view to just
have one set of codepaths. IMO if floats are good enough, we should
always use them; if not, we should just accept the speed losses.

>> On the other hand, if the original data only provides say 7 digit
>> precision, the use of SP is perfectly justified.
>>
> It's more about having a spare digit to lose.

It seems like floats are maybe not quite good enough, so I'll switch
to using double.

>> *: In the event that we don't have hardware OpenCL support, we can use a
>> software implementation such as pocl (http://pocl.sourceforge.net/). To be
>> very clear, even though this architecture is aimed to run entirely on the GPU,
>> this is not required.
>
> Regarding OpenCL availability: will it work with open-source drivers?
> If the answer is no, then CPU performance is the priority. A lot of
> users won't be able to use their GPU.

The plan is to use Mesa's OpenCL support on the GPU and use pocl
otherwise. Nonfree drivers are out of consideration, especially given
my experience with fglrx (spoiler: it's crap, and crashes when you
resize a terminal window).

Mesa's support is at the moment incomplete. Last I checked, they can
run bitcoin miners, but it's not yet useful for real work, with
support maybe 6-12 months out. Of course, requiring double support for
calculations further narrows the field to devices supporting
cl_khr_fp64 or cl_amd_fp64. For instance, even on the new,
high-performance Intel GPUs, fp64 is not supported, because Intel for
some reason thinks that a Haswell GPU competes with a Xeon Phi
coprocessor.

So yes, indeed the main optimization target will be OpenCL on the CPU,
at least for now. However, my suspicion is that quaternions + good
memory layout + parallelism will already give us a huge boost over the
current setup, so I'm not too worried, and I don't really want to
think too much about detailed optimization at this point, just general
principles.

Cheers, Henry
