[digikam-doc] /: split Editor Color Management sections

Gilles Caulier caulier.gilles at gmail.com
Mon Aug 29 12:37:39 UTC 2016


Git commit 522f0ed23ddd52c6b1b957ed8b866f53416e32ff by Gilles Caulier.
Committed on 29/08/2016 at 12:37.
Pushed by cgilles into branch 'master'.

split Editor Color Management sections

M  +13   -4    TODO
A  +291  -0    digikam/editor-cm-connection.docbook
A  +82   -0    digikam/editor-cm-definitions.docbook
A  +77   -0    digikam/editor-cm-intro.docbook
A  +132  -0    digikam/editor-cm-monitor.docbook
A  +62   -0    digikam/editor-cm-pcs.docbook
A  +200  -0    digikam/editor-cm-rawfile.docbook
A  +132  -0    digikam/editor-cm-rendering.docbook
A  +68   -0    digikam/editor-cm-srgb.docbook
A  +156  -0    digikam/editor-cm-wkspace.docbook
M  +12   -412  digikam/editor-cm.docbook
M  +9    -0    digikam/index.docbook
M  +9    -0    showfoto/index.docbook

http://commits.kde.org/digikam-doc/522f0ed23ddd52c6b1b957ed8b866f53416e32ff

diff --git a/TODO b/TODO
index 7029c39..37ea846 100644
--- a/TODO
+++ b/TODO
@@ -5,7 +5,7 @@ Handbook Files                             5.x Updates     Proof-Read         As
 --------------                             -----------     ----------         -----------          -------
 
 annexes-credits.docbook                    DONE            DONE               Gilles
-editor-color.docbook
+editor-color.docbook                       DONE            DONE               Gilles
 editor-color-auto.docbook
 editor-color-bw.docbook
 editor-color-correction.docbook
@@ -15,7 +15,16 @@ editor-color-levels.docbook
 editor-color-mixer.docbook
 editor-color-wb.docbook
 editor-color-neg.docbook                                                                           Need to be created
-editor-colormanagement.docbook
+editor-cm.docbook                          DONE            DONE               Gilles
+editor-cm-connection.docbook
+editor-cm-intro.docbook
+editor-cm-pcs.docbook
+editor-cm-rendering.docbook
+editor-cm-wkspace.docbook
+editor-cm-definitions.docbook
+editor-cm-monitor.docbook
+editor-cm-rawfile.docbook 
+editor-cm-srgb.docbook
 editor-decorate.docbook                    DONE            DONE               Gilles
 editor-decorate-border.docbook
 editor-decorate-inserttext.docbook
@@ -32,7 +41,7 @@ editor-enhance-inpaint.docbook
 editor-enhance-redeyes.docbook                                                                     Update to new RedEyes auto algorithm
 editor-enhance-sharpen.docbook
 editor-enhance-lenscorrection.docbook                                                              Tool based on LensFun library
-editor-filters.docbook
+editor-filters.docbook                     DONE            DONE               Gilles
 editor-filters-blurfx.docbook
 editor-filters-colorsfx.docbook                                                                    Add Lut3D and Edge filters effects
 editor-filters-emboss.docbook
@@ -42,7 +51,7 @@ editor-filters-distortionfx.docbook
 editor-filters-filmgrain.docbook
 editor-filters-raindrops.docbook
 editor-photoediting.docbook
-editor-transform.docbook
+editor-transform.docbook                   DONE            DONE               Gilles
 editor-transform-crop.docbook
 editor-transform-freerotation.docbook
 editor-transform-resize.docbook
diff --git a/digikam/editor-cm-connection.docbook b/digikam/editor-cm-connection.docbook
new file mode 100644
index 0000000..f6f3724
--- /dev/null
+++ b/digikam/editor-cm-connection.docbook
@@ -0,0 +1,291 @@
+<sect2 id="editor-cm-connection"> <title>The Color Space Connections</title>
+
+    <para>
+        So the question for each RGB trio of values in the (let us assume) 16-bit tiff produced by dcraw becomes, "What does a particular trio of RGB values for the pixels making up images produced by this particular (make and model) camera really mean in terms of some absolute standard referencing some ideal observer?" This absolute standard referencing an ideal observer is more commonly called a <emphasis>Profile Connection Space</emphasis>. A camera profile is needed to accurately characterize or describe the response of a given camera's pixels to light entering that camera, so that the RGB values in the output file produced by the raw converter can be translated first into an absolute Profile Connection Space (PCS) and then from the PCS to your chosen working space. As a very important aside, for most of the open source world (including &digikam;), the software used to translate from the camera profile to the PCS and from the PCS to your chosen working space and eventually to your chosen output space (for printing or perhaps monitor display) is based on lcms (the <ulink url="http://littlecms.com">little color management engine</ulink>). For what it's worth, my own testing has shown that lcms does more accurate conversions than Adobe's proprietary color conversion engine. Further, for almost all raw conversion programs, including commercial closed source software such as Adobe Photoshop, the raw conversion is typically based on the decoding of the proprietary raw file done by dcraw. David Coffin, author of dcraw, is the hero of raw conversion - without him we'd all be stuck using the usually Windows/Mac-only proprietary software that comes with our digital cameras. dcraw's interpolation algorithms (not to be confused with the aforementioned decoding of the proprietary raw file), which are part of &digikam; if properly used, produce results equal or superior to commercial, closed source software. We in the world of Linux and open source software are not second-class citizens when it comes to digital imaging. Far from it.
+    </para>
+
+    <para>
+        There are two commonly used Profile Connection Spaces - CIELAB and CIEXYZ (see <ulink url="http://en.wikipedia.org/wiki/Color_management">Color management</ulink>, section on color translation, then look up CIELAB and CIEXYZ on wikipedia).  Lcms uses the camera profile to translate the RGB values from the interpolated raw file, that is, the tiff produced by dcraw, into the appropriate Profile Connection Space (usually CIEXYZ - why CIEXYZ? I haven't taken the time to learn). A profile connection space is not itself a working space.  Rather a PCS is an absolute reference space used only for translating from one color space to another - think of a PCS as a Universal Translator for all the color profiles that an image might encounter in the course of its journey from camera raw file to final output:
+    </para>
+
+    <orderedlist>
+
+        <listitem><para>
+            Lcms uses the camera profile, also called an input profile, to translate the interpolated dcraw-produced RGB numbers, which only have meaning relative to your (make and model of) camera, to a second set of RGB numbers that only have meaning in the Profile Connection Space.
+        </para></listitem>
+
+        <listitem><para>
+            Lcms translates the Profile Connection Space RGB numbers to the corresponding numbers in your chosen working space so you can edit your image. And again, these working space numbers ONLY have meaning relative to a given working space.  The same red, visually speaking, is represented by different trios of RGB numbers in different working spaces; and if you assign the wrong profile the image will look wrong, slightly wrong or very wrong depending on the differences between the two profiles.
+        </para></listitem>
+
+        <listitem><para>
+            While you are editing your image in your chosen working space, lcms should translate all the working space RGB numbers back to the PCS, and then over to the correct RGB numbers that enable your monitor (your display device) to give you the most accurate possible display representation of your image as it is being edited. This translation for display is done on the fly and you should never even notice it happening, unless it doesn't happen correctly - then the displayed image will look wrong, perhaps a little wrong, perhaps really, really, really wrong.
+        </para></listitem>
+
+        <listitem><para>
+            When you are satisfied that your edited image is ready to share with the world, lcms translates the working space RGB numbers back into the PCS space and out again to a printer color space using a printer profile characterizing your printer/paper combination (if you plan on printing the image) or to sRGB (if you plan on displaying the image on the web or emailing it to friends or perhaps creating a slide-show to play on monitors other than your own).
+        </para></listitem>
+
+    </orderedlist>
+
+    <para>
+        To back up a little bit and look at the first color profile an image encounters, that is, the camera profile (see (1) immediately above) - dcraw can in fact apply your camera profile for you (dcraw uses lcms internally). But (i) the generating of the tiff composed of the interpolated RGB values derived from the camera raw file, and (ii) the application of the camera profile to the interpolated file, are two very distinct and totally separable (separable in theory and practice for dcraw; in theory only for most raw converters) steps. The dcraw command line output options "-o 0 [Raw color (unique to each camera)] -4 [16-bit linear] -T [tiff]" tell dcraw to output the RGB numbers from the raw interpolation into a tiff without applying a camera input profile (the words in brackets explain the options but should not be entered at the command line). Then, if you truly enjoy working from the command line, you can use the lcms utility tifficc to apply your camera profile yourself. The advantage of doing so is that you can tell lcms to use high quality conversion (dcraw seems to use the lcms default medium). The disadvantage, of course, is that applying your camera profile from the command line adds one extra step to your raw workflow.
+    </para>
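+
+    <para>
+        As a concrete sketch of those two separable steps (the file names and profile names here are hypothetical, and only tifficc's basic -t/-i/-o options are shown - check your lcms version's tifficc help for its quality settings):
+    </para>
+
+    <screen>
+# Step (i): interpolate the raw file into a 16-bit linear tiff in
+# raw camera color, with no camera input profile applied.
+dcraw -o 0 -4 -T photo.nef          # writes photo.tiff
+
+# Step (ii): apply a camera profile and convert to a chosen
+# working space with the lcms tifficc utility (perceptual intent).
+tifficc -t 0 -i mycamera.icc -o myworkingspace.icc photo.tiff photo-ws.tif
+    </screen>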
+
+    <sect3 id="using-iccprofile">
+        <title>Where to find camera profiles</title>
+
+        <para>
+            So where do we get these elusive and oh-so-necessary camera-specific profiles that we need to translate our interpolated raw files to a working color space? The <ulink url="http://ufraw.sourceforge.net/Colors.html">UFRAW website </ulink> section on color management has a bit of information on where to find ready-made camera profiles. It's an unfortunate fact of digital imaging that the camera profiles supplied by Canon, Nikon, and the like don't work as well with raw converters other than each camera manufacturer's own proprietary raw converter. Which is why Bibble and Phase One (and Adobe, but ACR hides the Adobe-made profiles inside the program code), for example, have to make their own profiles for all the cameras that they support - keep this proprietary propensity of your camera manufacturer in mind next time you buy a digital camera.
+        </para>
+
+        <para>
+            But back to finding a camera profile for your camera - the real answer (assuming you don't find a ready-made profile that makes you happy) is to make your own camera profile or have one made for you. There are quite a few commercial services who provide profiling services (for a fee, of course). Or you can use LProf or Argyll to profile your camera yourself. I haven't yet walked down that road so I cannot speak about how easy or difficult the process of profiling a camera might be. But I would imagine, knowing how very meticulous the people behind Argyll, LProf, and lcms are about color management, that making your own camera profile is very doable and very likely the results will be better than any proprietary profile. After all, Canon (and also Bibble and Phase One for that matter) didn't profile MY camera - they just profiled a camera like mine.
+        </para>
+
+        <para>
+            Working Spaces:
+        </para>
+
+        <para>
+            So now your raw file has been interpolated by dcraw and you've obtained a camera profile and used lcms tifficc to apply your camera profile to the tiff produced by dcraw (or you've asked dcraw to apply it for you).  What does all this mean?  The real answer involves a lot of math and color science that goes way over my head and likely yours.  The short, practical answer is that neither the camera profile space nor the Profile Connection Space is an appropriate space for image editing.  Your next step is to choose a working space for image editing.  And then you (or rather the lcms color management engine that your open source digital imaging software uses) actually perform a double translation. First lcms uses the camera profile to translate the RGB values of each pixel in the dcraw-output-image-without-camera-profile-applied into the aforementioned Profile Connection Space. Then it translates the RGB values of each pixel from the PCS to your chosen working space.
+        </para>
+
+        <para>
+            Confusions and confusing terminology:
+        </para>
+
+        <para>
+            Before talking more about working spaces, some confusions and confusing terminology need to be cleared up:
+        </para>
+
+        <para>
+            First, sRGB is both a working color space and an output color space for images intended for the web and for monitor display (if you have a spiffy new monitor with a gamut larger than the gamut covered by sRGB, obviously you might want to reconsider what output profile to use to best take advantage of your wonderful and hopefully calibrated and profiled monitor, but please convert your image to sRGB before sending it on to your friends!).  sRGB is also the color space that a lot of home and mass-production commercial printers expect image files to be in when sent to the printer.  It is also the color space that most programs assume if an image does not have an embedded color profile telling the program what color space should be used to interpret (translate) the RGB numbers. So if you choose to not use color-management, your color-management choices are simple - set everything to sRGB.
+        </para>
+
+        <para>
+            Second, all jpegs (or tiffs, if you have an older Minolta Dimage camera) coming straight out of a camera (even if produced by point-and-shoot cameras that don't allow you to save a raw file) start life inside the camera as a raw file produced by the camera's A to D converter. The processor inside the camera interpolates the raw file, assigns a camera profile, translates the resulting RGB numbers to a working space (usually sRGB but sometimes you can choose AdobeRGB, depending on the camera), does the jpeg compression, and stores the jpeg file on your camera card. So jpegs (or tiffs) from your camera NEVER need to be assigned a camera or input profile which is then translated to a working space via a PCS. Jpegs from a camera are already in a working space.
+        </para>
+
+        <para>
+            Third, in case anyone is unsure on this point, note that an interpolated raw file is no longer a raw file - it has been interpolated and then output as a tiff whose RGB values need to be translated to a working space, using the camera profile, the PCS, and lcms.
+        </para>
+
+        <para>
+            Fourth (strictly for future reference), to introduce a bit of commonly heard color-management terminology here - the camera profile and your printer's color profile are both device-dependent, whereas the working space will be device-independent - it can be used with any image, with any properly color-managed software, without regard for where the image originated.
+        </para>
+
+        <para>
+            Fifth, above I have used the words translate and translation as a descriptive metaphor for what lcms does when it translates RGB values from one color space to another via the PCS. The usual and correct terminology is convert and conversion, which I will use below. The four methods of conversion from one color space to another are: perceptual, relative colorimetric, absolute colorimetric, and saturation.  Which method of conversion you should use for any given image processing step from raw file to final output image is beyond the scope of this tutorial. The standard advice is: when in doubt, use perceptual.
+        </para>
+
+        <para>
+            Sixth (and again, strictly for future reference), assign a profile means change the meaning of the RGB numbers in an image by embedding a new profile without changing the actual RGB numbers associated with each pixel in the image; convert means embed a new profile, but also change the RGB numbers at the same time so that the meaning of the RGB values - that is, the real-world visible color represented by the trio of RGB numbers associated with each pixel in an image - remains the same before and after the conversion from one space to another. You should be able to do multiple conversions of an image from one working space to another, and with a properly color-managed image editor, even though all the RGB numbers in the image will change with each conversion, the image on your screen should look the same (leaving aside the usually unnoticeable small but inevitable changes from accumulated gamut mismatches and mathematical rounding errors). However, every time you assign a new working space profile rather than convert to a new working space, the appearance of the image should more or less drastically change (usually for the worse).
+        </para>
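+
+        <para>
+            As a hypothetical illustration of assign versus convert (the file and profile names are made up), ImageMagick's convert program can do both: if an image carries no embedded profile, the first -profile option merely tags it, leaving the RGB numbers alone; if the image already carries a profile, -profile converts, changing the RGB numbers to preserve the visible colors:
+        </para>
+
+        <screen>
+# Assign: photo.tif has no embedded profile, so this only embeds
+# the working space profile - the RGB numbers are unchanged.
+convert photo.tif -profile myworkingspace.icc photo-tagged.tif
+
+# Convert: photo-tagged.tif now carries a profile, so a further
+# -profile changes the RGB numbers to the new space.
+convert photo-tagged.tif -profile otherspace.icc photo-converted.tif
+        </screen>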
+
+        <para>
+            Finally (and this is a crucially important point), color management is NOT only relevant if you shoot raw. Color management affects every stage of the image processing pipeline, whether you start with a raw file that you yourself interpolate and translate into a tiff, or whether you start with a jpeg or tiff produced by your camera.
+        </para>
+
+        <para>
+            Copyrighted and copyleft working spaces:
+        </para>
+
+        <para>
+            I will take it as given that ALL the ordinarily encountered working spaces, such as:
+        </para>
+
+        <orderedlist>
+
+            <listitem><para>
+                The several variants of sRGB (see <ulink url="http://www.color.org/v4spec.xalter">color.org</ulink>).
+            </para></listitem>
+
+            <listitem><para>
+                <ulink url="http://www.brucelindbloom.com">BruceRGB</ulink>.
+            </para></listitem>
+
+            <listitem><para>
+                The various ECI (European color initiative) working space <ulink url="http://www.eci.org/doku.php?id=en:colourstandards:workingcolorspaces">profiles</ulink>.
+            </para></listitem>
+
+            <listitem><para>
+                AdobeRGB, Adobe WideGamutRGB, and Kodak/Adobe ProPhotoRGB (Kodak and Adobe ProPhoto are the same, just branded differently) and their <ulink url="http://www.behrmann.name/index.php?option=com_content&task=view&id=34&Itemid=68">non-branded, non-copyrighted</ulink> counterparts (Oyranos includes a non-branded version of AdobeRGB).
+            </para></listitem>
+
+            <listitem><para>
+                And quite a few others that could be added to this list - all of these are more or less suitable as working spaces. Which working space you should use depends only and solely on YOU, on YOUR requirements as the editor of YOUR digital images with YOUR eventual output intentions (web, fine art print, etc).
+            </para></listitem>
+
+        </orderedlist>
+
+        <para>
+            However, as a critical aside, if you are using Adobe (or other copyrighted) working space profiles, these profiles contain copyright information that shows up in your image exif information. Lately I've been perusing the openicc mailing lists. Apparently lcms can be used to produce nonbranded, copyleft working space profiles that are just the same as - actually indistinguishable from - the branded, copyrighted working space profiles. It would be a wonderful addition to &digikam; if a set of "copyleft" working space profiles, including nonbranded, relabelled versions of ProPhotoRGB, AdobeRGB, and Adobe WidegamutRGB (perhaps in two flavors each: linear gamma and the usual gamma), could be bundled as part of the &digikam; package.
+        </para>
+
+        <para>
+            Which working space: gamma
+        </para> 
+
+        <para>
+            Now, the next question is: which working space should I use? <ulink url="http://en.wikipedia.org/wiki/Color_management#Working_spaces">Wikipedia says:</ulink> <blockquote><para>Working spaces, such as sRGB or Adobe RGB, are color spaces that facilitate good results while editing. For instance, pixels with equal values of RGB should appear neutral. Using a large (gamut) working space will lead to posterization, while using a small working space will lead to clipping. This trade-off is a consideration for the critical image editor.</para></blockquote>
+        </para>
+
+        <para>
+            Well, that quote from wikipedia is about as clear as mud and I don't know if I will be able to explain it more clearly, but I will try. "[P]ixels with equal values of RGB should appear neutral" just means that for any given pixel in an image that has been converted to a suitable working space, if R=G=B you should see grey or black or white on your screen.
+        </para>
+
+        <para>I am not aware of a list of other technical requirements for a suitable working space, though undoubtedly someone has produced such a list. But most working space profiles are characterized by: </para>
+
+        <orderedlist>
+
+            <listitem><para>
+                RGB primaries which dictate the range of colors, that is, the gamut covered by a given profile.
+            </para></listitem>
+
+            <listitem><para>
+                White point, usually D50 or D65, which dictates the total dynamic range of the working space, from 0,0,0 (total black) to the brightest possible white.
+            </para></listitem>
+
+            <listitem><para>
+                Gamma.
+            </para></listitem>
+
+        </orderedlist> 
+
+        <para>
+            The practical consequences that result from using different RGB primaries, leading to larger or smaller working spaces, are discussed below. The practical consequences for different choices for the working space white point are beyond the scope of this tutorial. Here I will talk a little bit about the practical consequences of the working space gamma (for an excellent article and references, look up gamma on wikipedia).
+        </para>
+
+        <para>
+            The gamma of a color profile dictates what power transform needs to take place to properly convert from an image's embedded color profile (perhaps your working color space) to another color profile with a different gamma, such as (i) the display profile used to display the image on the screen, (ii) perhaps a new working space, or (iii) perhaps your printer's color space.
+        </para>
+
+        <tip>
+            <para>
+                Mathematically speaking, for a power transform you normalize the RGB numbers and raise the resulting numbers to an appropriate power depending on the respective gammas of the starting and ending color space, then renormalize the results to a new set of RGB numbers. Lcms does this for you when you ask lcms to convert from one color space to another; however, if ALL you are doing is a power transform, use imagemagick instead of lcms and just manipulate the RGB numbers directly - the results will be more accurate.
+            </para>
+        </tip>
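+
+        <para>
+            A sketch of such a pure power transform with ImageMagick (file names hypothetical): the -gamma operator raises each normalized channel value to the power 1/value, so it can move an image between gamma encodings directly:
+        </para>
+
+        <screen>
+# encode a linear (gamma 1.0) tiff into gamma 2.2 encoding:
+# -gamma 2.2 raises each normalized value v to v^(1/2.2)
+convert linear.tif -gamma 2.2 encoded.tif
+
+# the inverse: -gamma 0.4545 applies the exponent 1/0.4545 = 2.2,
+# decoding the gamma 2.2 image back to linear
+convert encoded.tif -gamma 0.4545 linear-again.tif
+        </screen>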
+
+        <para>
+            One practical consequence of the gamma of a working space is that the higher the gamma, the more tones are available for editing in the shadows, with consequently fewer tones available in the highlights. So theoretically, if you are working on a very dark-toned (low key) image you might want a working space with a higher gamma.  And if you are working on a high key image, say a picture taken in full noon sunlight of a wedding dress with snow as a backdrop, you might want to choose a working space with a lower gamma, so you have more available tonal gradations in the highlights. But in the real world of real image editing, almost everyone uses working spaces with either gamma 1.8 or 2.2.
+        </para>
+
+        <para>
+            Some people are trying to standardize on gamma 2.0.  sRGB and LStar-RGB are not gamma-based working spaces. Rather, sRGB uses a <ulink url="http://en.wikipedia.org/wiki/SRGB">hybrid gamma</ulink>, and LStar-RGB uses a luminosity-based tonal response curve instead of a gamma value - see <ulink url="http://www.colormanagement.org/en/workingspaces.html">here</ulink> for more information, and then google around for more in-depth information.
+        </para>
+
+        <para>
+            In addition to gamma 1.8 and gamma 2.2, the only other gamma for a working space that gets much mention or use is gamma 1.0, also called linear gamma. <emphasis>Linear gamma</emphasis> is used in HDR (high dynamic range) imaging and also if one wants to avoid introducing gamma-induced errors into one's regular low dynamic range editing. Gamma-induced errors are a topic outside the scope of this tutorial, but see <ulink url="http://www.4p8.com/eric.brasseur/gamma.html">Gamma errors</ulink> in <ulink url="http://www.21stcenturyshoebox.com/essays/color_reproduction.html">picture scaling</ulink> for examples of gamma-induced color shifts.
+        </para>
+
+        <para>
+            Unfortunately and despite their undeniable mathematical advantages, linear gamma working spaces have so few tones in the shadows that (in my opinion) they are impossible to use for editing if one is working in 8-bits, and still problematic at 16-bits.  When the day comes when we are all doing our editing on 32-bit files produced by our HDR cameras on our personal supercomputers, I predict that we will all be using working spaces with gamma 1; Adobe Lightroom is already using a linear gamma working space "under the hood" and Lightzone has always used a linear gamma working space.
+        </para>
+
+        <para>
+            Which working space: <emphasis>large gamut</emphasis> or <emphasis>small gamut</emphasis>
+        </para>
+
+        <para>
+            One major consideration in choosing a working space is that some working spaces are bigger than others, meaning they cover more of the visible spectrum (and perhaps even include some imaginary colors - mathematical constructs that don't really exist).  These bigger spaces offer the advantage of allowing you to keep all the colors captured by your camera and preserved by the lcms conversion from your camera profile to the really big profile connection space.
+        </para>
+
+        <para>
+            But keeping all the possible colors comes at a price. It seems that any given digital image (pictures of daffodils with saturated yellows being one common exception) likely only contains a small subset of all the possible visible colors that your camera is capable of capturing. This small subset is easily contained in one of the smaller working spaces. Using a very large working space means that editing your image (applying curves, saturation, etc) can easily produce colors that your eventual output device (printer, monitor) simply cannot display. So the conversion from your working space to your output device space (say your printer) will have to remap the out-of-gamut colors in your edited image, some of which might even be totally imaginary, to your printer color space with its much smaller gamut, leading to inaccurate colors at best and at worst to banding (posterization - gaps in what should be a smooth color transition, say, across an expanse of blue sky) and clipping (your carefully crafted muted transitions across delicate shades of red, for example, might get remapped to a solid block of dull red after conversion to your printer's color space).
+        </para>
+
+        <para>
+            In other words, large gamut working spaces, improperly handled, can lead to lost information on output. Small gamut working spaces can clip information on input. Like Wikipedia says, it's a trade-off. Here is some oft-repeated advice:
+        </para>
+
+        <orderedlist>
+
+            <listitem><para>
+                For images intended for the web, use one of the sRGB variants (there are several).
+            </para></listitem>
+
+            <listitem><para>
+                For the most accuracy in your image editing (that is, making the most of your "bits" with the least risk of banding or clipping when you convert your image from your working space to an output space), use the smallest working space that includes all the colors in the scene that you photographed, plus a little extra room for those new colors you intentionally produce as you edit.
+            </para></listitem>
+
+            <listitem><para>
+                If you are working in 8-bits rather than 16-bits, choose a smaller space rather than a larger space.
+            </para></listitem>
+
+            <listitem><para>
+                For archival purposes, convert your raw file to a 16-bit tiff with a large gamut working space to avoid losing color information. Then convert this archival tiff to your working space of choice (saving the converted working tiff under a new name, of course). See <ulink url="http://simon.tindemans.eu/essays/scenereferredworkflow">here</ulink> for more details.
+            </para></listitem>
+
+        </orderedlist>
+
+        <para>
+            The whys of these bits of advice regarding which working space are beyond the scope of this tutorial.  See Bruce Lindbloom's excellent website (<ulink url="http://www.brucelindbloom.com/">Info, Information about RGB Working Spaces</ulink>) for a visual comparison of the gamut (array of included colors) of the various working color spaces.  See <ulink url="http://www.luminous-landscape.com/tutorials/prophoto-rgb.shtml">here</ulink> and <ulink url="http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm">here</ulink> for a pro and con presentation, respectively, of the merits of using large gamut working spaces. And while you are on the <ulink url="http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm">cambridgeincolour.com</ulink> website, check out the tutorial on color management.
+        </para>
+
+    </sect3>
+
+    <sect3 id="softproofing">
+        <title>Soft Proofing</title>
+
+        <para>
+            Soft proofing is a way of previewing on the screen (monitor) the result to be expected from output on another device, typically a printer. Soft proofing will show you the differences to be expected before you actually print (and waste your costly ink), so you can improve your settings without wasting time and money.
+        </para>
+
+    </sect3>
+
+    <sect3 id="rendering-intention">
+        <title>Rendering intention</title>
+
+        <para>
+            Rendering intent refers to the way gamuts are handled when the intended target color space cannot handle the full gamut; the sketch after this list shows how an intent is selected on the command line.
+        </para>
+
+        <itemizedlist>
+
+            <listitem><para>
+                <emphasis>Perceptual</emphasis>, also called Image or Maintain Full Gamut.  This is generally recommended for photographic images. The color gamut is expanded or compressed when moving between color spaces to maintain consistent overall appearance. Low saturation colors are changed very little. More saturated colors within the gamuts of both spaces may be altered to differentiate them from saturated colors outside the smaller gamut space. Perceptual rendering applies the same gamut compression to all images, even when the image contains no significant out-of-gamut colors.
+            </para></listitem>
+
+            <listitem><para>
+                <emphasis>Relative Colorimetric</emphasis>, also called Proof or Preserve Identical Color and White Point.  Reproduces in-gamut colors exactly and clips out-of-gamut colors to the nearest reproducible hue.
+            </para></listitem>
+
+            <listitem><para>
+                <emphasis>Absolute Colorimetric</emphasis>, also called Match or Preserve Identical Colors.  Reproduces in-gamut colors exactly and clips out-of-gamut colors to the nearest reproducible hue, sacrificing saturation and possibly lightness. On tinted papers, whites may be darkened to keep the hue identical to the original. For example, cyan may be added to the white of a cream-colored paper, effectively darkening the image. Rarely of interest to photographers.
+            </para></listitem>
+
+            <listitem><para>
+                <emphasis>Saturation</emphasis>, also called Graphic or Preserve Saturation.  Maps the saturated primary colors in the source to saturated primary colors in the destination, neglecting differences in hue, saturation, or lightness. For block graphics; rarely of interest to photographers.
+            </para></listitem>
+
+        </itemizedlist>
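+
+        <para>
+            In lcms-based command line tools these four intents are selected by number, following lcms's numbering convention. A hypothetical tifficc proof-conversion (the profile names are made up):
+        </para>
+
+        <screen>
+# lcms rendering intents: 0 = perceptual, 1 = relative colorimetric,
+# 2 = saturation, 3 = absolute colorimetric
+tifficc -t 1 -i myworkingspace.icc -o myprinter.icc photo.tif proof.tif
+        </screen>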
+
+    </sect3>
+
+    <sect3 id="iccprofile-links">
+        <title>Links</title>
+
+        <itemizedlist>
+
+            <listitem><para>
+                <ulink url="http://www.oyranos.org/wiki/index.php?title=Main_Page">Color wiki</ulink>
+            </para></listitem>
+
+            <listitem><para>
+                <ulink url="http://en.wikipedia.org/wiki/CIELAB">CIELab</ulink>
+            </para></listitem>
+
+            <listitem><para>
+                <ulink url="http://en.wikipedia.org/wiki/Gamut">Gamut explained</ulink>
+            </para></listitem>
+
+        </itemizedlist>
+
+    </sect3>
+
+</sect2>
+
+<!--
+Local Variables:
+mode: sgml
+sgml-minimize-attributes:nil
+sgml-general-insert-case:lower
+sgml-indent-step:0
+sgml-indent-data:nil
+End:
+-->
diff --git a/digikam/editor-cm-definitions.docbook b/digikam/editor-cm-definitions.docbook
new file mode 100644
index 0000000..68a882d
--- /dev/null
+++ b/digikam/editor-cm-definitions.docbook
@@ -0,0 +1,82 @@
+<sect2 id="editor-cm-definitions">
+    <title>More definitions about Color Management</title>
+
+    <para>
+        You've reached the end of this tutorial on color management. We've "color-managed" our way all the way from the camera, to the monitor, to the working space, to the printer. I've learned a lot and I hope you have, too. What follows are some additional comments and definitions:
+    </para>
+
+    <para>
+        <emphasis>Assign</emphasis> a profile means change the meaning of the RGB numbers in an image by embedding a new profile without changing the actual RGB numbers associated with each pixel in the image. "Convert" to a profile means embed a new profile, but also change the RGB numbers at the same time so that the meaning of the RGB values - that is, the real-world visible color represented by the trio of RGB numbers associated with each pixel in an image - remains the same before and after the conversion from one space to another.
+    </para>
+
+    <para>
+        On the other hand, every time you assign a new working space profile rather than convert to a new working space (except when initially assigning a camera profile to the image file you get from your raw processing software), the appearance of the image should more or less drastically change (usually for the worse, unless the wrong profile had previously been inadvertently embedded in the image).
+    </para>
+
+    <para>
+        In theory, you should be able to do multiple conversions of an image from one working space to another, and if you are using a color-managed image editor, even though all the RGB numbers in the image will change with each conversion, the image displayed on your screen should look the same. In actual fact, because of rounding errors upon each conversion, not to mention gamut-clipping when going from a larger to a smaller working space, every time you convert from one space to another the image degrades a bit.
+    </para>
+
+    <para>
+        <emphasis>Device-dependent</emphasis> and <emphasis>device-independent</emphasis> profiles: The camera profile, a scanner profile, your monitor's profile, and your printer's color profile are all device-dependent profiles - these profiles only work with the specific device for which they were produced by means of profiling. Working space profiles and the PCSs are "device-independent". Once an image file has been translated by LCMS via a PCS to a device-independent working space, in a sense it no longer matters what device originally produced the image. But as soon as you want to display or print the image, then the device (monitor, printer) used matters a great deal and requires a device-dependent profile.
+    </para>
+
+    <para>
+        An <emphasis>interpolated raw file</emphasis> isn't a raw file. For some reason this simple point causes a lot of confusion. But after a raw file has been interpolated by raw processing software and then output as a tiff or jpeg, the original raw file is still a raw file, of course, but the interpolated file is just an image file. It isn't a raw file.
+    </para>
+
+    <para>
+        <emphasis>Linear</emphasis> has two related and easily confused definitions. "Linear" can mean that the image tonality reflects the tonality in the original scene as photographed instead of being altered by the application of an S-curve or other means of changing local and global tonality. It can also mean that the gamma transfer curve of the color space is linear. An image can be "linear" in either, both, or neither of these two senses. A raw image as developed by dcraw is linear in both senses. The same image as developed by Canon's DPP won't be linear in either sense.
+    </para>
+
+    <para>
+        <emphasis>HDR and LDR</emphasis> do not refer to the bit-depth of the image. "High dynamic range" and "low dynamic range" refer to the total dynamic range encompassed by an image. A regular low dynamic range image, say encompassing a mere 5 "stops" (the average digital camera these days can easily accommodate 8 or 9 stops), can be saved as an 8-, 16-, 32-, or even 64-bit image, depending on your software, but the dynamic range of the image isn't thereby increased. Only the number of discrete steps from the brightest to the darkest tone in the image has changed. Conversely, a 22-stop scene (way beyond the capacity of a consumer-oriented digital camera without using multiple exposures) can be saved as an 8- or 16-bit image, but the resulting image will exhibit extreme banding (that is, it will display extreme banding in any given tonal range that can actually be displayed on a typical monitor at one time) because of the relatively few available discrete tonal steps from the lightest to the darkest tone in the image.
+    </para>
+
+    <para>
+        <emphasis>In-camera produced jpegs don't need a camera profile</emphasis>. All jpegs (or tiffs, if you have an older Minolta Dimage camera) coming straight out of a camera (even if produced by point-and-shoot cameras that don't allow you to save a raw file) start life inside the camera as a raw file produced by the camera's A to D converter. If you save your images as jpegs, then the processor inside the camera interpolates the raw file, assigns a camera profile, translates the resulting RGB numbers to a working space (usually sRGB but sometimes you can choose AdobeRGB, depending on the camera), does the jpeg compression, and stores the jpeg file on your camera card. So jpegs (or tiffs) from your camera don't need to be assigned a camera profile which is then translated to a working space via a PCS. Jpegs from a camera are already in a working space.
+    </para>
+
+    <para>
+        <emphasis>Useful mathematical information</emphasis> if you are dealing with dcraw's linear gamma output: Mathematically speaking, when doing a gamma transform you normalize (that is, divide by 255 if you are working with 8-bit values) the RGB numbers and raise the resulting numbers to an appropriate power depending on the respective gammas of the starting and ending color space, then renormalize the results to a new set of RGB numbers. It's not hard, and very instructive, to do this with a calculator for a few sets of RGB numbers spaced from (0,0,0) to (255,255,255) to see how RGB numbers change from one gamma encoding to another. LCMS does this for you when you ask LCMS to convert from one color space to another. However, if ALL you are doing is converting from one color space to the same color space except for a different gamma, use imagemagick instead of LCMS and just manipulate the RGB numbers directly, then assign the new working space to the image - the results will be more accurate than going through a color space transform.
+    </para>
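+
+    <para>
+        A worked example of that arithmetic, assuming a conversion from linear (gamma 1.0) encoding to gamma 2.2 encoding:
+    </para>
+
+    <screen>
+Take the 8-bit value 128 in a linear (gamma 1.0) image:
+    normalize:    128 / 255        = 0.502
+    apply power:  0.502 ^ (1/2.2)  = 0.731
+    renormalize:  0.731 * 255      = 186 (rounded)
+So linear 128 becomes roughly 186 under a gamma 2.2 encoding.
+    </screen>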
+
+    <para>
+        <emphasis>Copyrighted and copyleft working spaces</emphasis>: I will take it as given that all the ordinarily encountered working spaces, such as:
+    </para>
+
+    <orderedlist>
+
+        <listitem><para>
+            The several variants of sRGB (see color.org).
+        </para></listitem>
+
+        <listitem><para>
+            BruceRGB or BestRGB.
+        </para></listitem>
+
+        <listitem><para>
+            The various ECI (European color initiative) working space profiles.
+        </para></listitem>
+
+        <listitem><para>
+            AdobeRGB, Adobe WideGamutRGB, and Kodak/Adobe ProPhotoRGB (Kodak and Adobe ProPhoto are the same, just branded differently) and their non-branded, non-copyrighted counterparts (Oyranos includes a non-branded version of AdobeRGB).
+        </para></listitem>
+
+    </orderedlist>
+
+    <para>
+        And quite a few other working spaces that could be added to this list, are all more or less suitable as working spaces. Which working space you should use depends only and solely on you, on your requirements as the editor of your digital images with your eventual output intentions (web, fine art print, etc). However, as a critical aside, if you are using Adobe or other copyrighted working space profiles, these profiles contain copyright information that shows up in your image exif information. Lately I've been perusing the openicc mailing lists. Apparently LCMS can be used to produce nonbranded, copyleft working space profiles that are just the same as - actually indistinguishable from - the branded, copyrighted working space profiles. It would be a wonderful addition to &digikam; if a set of "copyleft" working space profiles, including nonbranded, relabelled versions of ProPhotoRGB, AdobeRGB, and Adobe WidegamutRGB (perhaps in two flavors each: linear gamma and the usual gamma), could be bundled as part of the &digikam; package.
+    </para>
+
+</sect2>
+
+<!--
+Local Variables:
+mode: sgml
+sgml-minimize-attributes:nil
+sgml-general-insert-case:lower
+sgml-indent-step:0
+sgml-indent-data:nil
+End:
+-->
diff --git a/digikam/editor-cm-intro.docbook b/digikam/editor-cm-intro.docbook
new file mode 100644
index 0000000..d8a7f08
--- /dev/null
+++ b/digikam/editor-cm-intro.docbook
@@ -0,0 +1,77 @@
+<sect2 id="editor-cm-intro">
+    <title>Introduction</title>
+
+    <para>
+        The point of a color-managed workflow is to ensure that the colors coming from your camera or scanner have a predictable relationship with the colors you actually photographed or scanned, that the colors displayed on your monitor match the colors coming from your camera or scanner, and that the colors you print or display on the web match the colors you produced in your digital darkroom.
+    </para>
+
+    <para>
+        <screenshot>
+            <screeninfo></screeninfo>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="&path;editor-cm-colormanagement.png" format="PNG"/>
+                </imageobject>
+                <textobject>
+                    <phrase></phrase>
+                </textobject>
+            </mediaobject>
+        </screenshot>
+    </para>
+
+    <sect3 id="buttons-to-push">
+        <title>Which buttons do I push?</title>
+
+        <para>
+            When it comes to color management, everyone wants to know, "Which buttons do I push to get the results I want?" Unfortunately, color management of necessity involves making informed choices at every step along the image-processing workflow. The purpose of this tutorial is to provide sufficient background information on color management, along with links to more in-depth information, to enable you to begin to make your own informed decisions, based on your own desired results.
+        </para>
+
+    </sect3>
+
+    <sect3 id="CM-needed">
+        <title>Is there anyone who doesn't need to worry about color management?</title>
+
+        <para>
+            If your imaging workflow meets all six criteria listed below, then you don't need to worry about color management.
+        </para>
+
+        <orderedlist>
+
+            <listitem><para>
+                You are working at a monitor properly calibrated to the sRGB color space (more about that below).
+            </para></listitem>
+
+            <listitem><para>
+                Your imaging workflow starts with an in-camera-produced jpeg already in the sRGB color space.
+            </para></listitem>
+
+            <listitem><para>
+                You work exclusively in the sRGB color space for editing.
+            </para></listitem>
+
+            <listitem><para>
+                Your printer wants images in the sRGB color space.
+            </para></listitem>
+
+            <listitem><para>
+                Your scanner produces images in the sRGB color space.
+            </para></listitem>
+
+            <listitem><para>
+                Your only other image output is via email or the web, where sRGB is the de facto standard.
+            </para></listitem>
+
+        </orderedlist>
+
+    </sect3>
+</sect2>
+
+<!--
+Local Variables:
+mode: sgml
+sgml-minimize-attributes:nil
+sgml-general-insert-case:lower
+sgml-indent-step:0
+sgml-indent-data:nil
+End:
+-->
diff --git a/digikam/editor-cm-monitor.docbook b/digikam/editor-cm-monitor.docbook
new file mode 100644
index 0000000..611d447
--- /dev/null
+++ b/digikam/editor-cm-monitor.docbook
@@ -0,0 +1,132 @@
+<sect2 id="editor-cm-monitor">
+    <title>Calibrating and Profiling Your Monitor RGB</title> 
+
+    <sect3>
+        <title>If I choose to work exclusively in the sRGB color space, do I need to calibrate my monitor?</title>
+
+        <para>
+            Yes! Whether you stay within the color gamut provided by sRGB or not, you need a properly calibrated monitor because sRGB assumes that your monitor is calibrated to sRGB. Your monitor calibration closes the loop. If you work within the color gamut provided by sRGB then you need to calibrate your monitor to the sRGB standard (or produce and use an accurate monitor profile, or both).
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>What are the consequences of working with an uncalibrated monitor?</title>
+
+        <para>
+            There are several possible consequences, none of them good. Every monitor, calibrated or otherwise, has a native (uncalibrated) white point, expressed as a temperature in degrees Kelvin. The white point of a monitor (calibrated or not) is the color you see when you are looking at a patch of pure white on your screen. Pure white is when the RGB values in your image all equal 255 (as expressed in 8-bits), such as the plain white background of a web page or an office document. You are thinking, "white is white" but if you were able to line up several monitors calibrated to different white points, you would see that the higher the temperature of the monitor's white point, the bluer the screen looks in comparison with monitors with lower white points. If you can find the controls of your own monitor, change the temperature up and down (remembering to put it back to its initial setting when you are done, unless you decide you want a different white point). Your eyes, which adapt quickly to a constant white point, will easily discern the screen getting bluer and yellower as you move the white point higher and lower. If your uncalibrated monitor is too blue (native CRT color temperature is typically 9300K and sRGB assumes 6500K), as you edit your image you will overcompensate and produce images that will look yellowish and too warm on a properly calibrated monitor. Conversely, if your monitor is too yellow because the color temperature is set too low (I believe LCD native color temperature is around 5500K), your images will look blueish/too cool on a properly calibrated monitor. 
+        </para>
+
+        <para>
+            Setting a proper white point is only part of monitor calibration. You also need a proper black point, brightness (luminance), and gamma (transfer) function. If your monitor is too dark because the black point is set too low, you will overcompensate and produce images that look washed out on a properly calibrated monitor. Conversely, if your monitor black point is set too high, your images will look too dark and overly saturated on a properly calibrated monitor.
+        </para>
+
+        <para>
+            If the brightness/contrast is set too high, you will assume your images have a lot more "pop" than they really have when viewed on a properly calibrated monitor, plus your eyes will hurt and your LCD screen will burn out faster.
+        </para>
+
+        <para>
+            If your monitor gamma is improperly set, your tonal variations from dark to light will be off. That is, the shadows or highlights might be overly compressed or expanded, leading you to compensate in the opposite direction. So when viewed on a properly calibrated monitor, the shadows might be too bright or dark (or the highlights too dark or bright), with the rest of the image suffering from tonal over-compression. And heaven help you if the internal R, G, and B guns (or LCD equivalent) of your monitor are improperly set (each gun has its own black point and gain), because the resulting color casts - too green, too magenta, too orange, etc, that you will inevitably create by "correcting" your image during editing - are very obvious when viewed on a properly calibrated monitor.
+        </para>
+
+        <para>
+            Whether or not your monitor is properly calibrated, you might be surprised by the results of comparing an image you've edited on your home monitor to the same image as displayed by other monitors in your house or on your friends' and neighbors' monitors. We certainly were - we have two Sony Trinitron monitors in our home, one with a failing (too high) green gun and one with a failing (too high) blue gun. Every image edited on either monitor looked very wrong on the other monitor, until we purchased a spectrophotometer to calibrate and profile both monitors. Unfortunately, at this point neither of these two monitors can be calibrated to display a proper black point, so they are no longer used for image editing - the point being that an additional benefit of using a spectrophotometer is that you know when it's time to replace your monitor.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>The meaning of "black point" and "brightness" seems pretty clear, but what does "gamma" mean?</title>
+
+        <para>
+            See <ulink url="http://en.wikipedia.org/wiki/Gamma_correction">this Wikipedia article</ulink> for an overview of the role of gamma in monitors and photography; the links at the bottom of the article are all excellent sources of additional information. Wikipedia says "Gamma compression, also known as gamma encoding, is used to encode linear luminance or RGB values into video signals or digital video file values; gamma expansion is the inverse, or decoding, process ... Gamma encoding helps to map data (both analog and digital) into a more perceptually uniform domain."  Yeah, I know, clear as mud. Read the Wikipedia article and study the pictures. Eventually it will sink in. If you wade very deeply into image editing and color management, eventually you will need to make decisions about what gamma (or other encoding/decoding function) you want to use when you calibrate your monitor, profile your digital camera, and choose a working color space. When in doubt (for those of you who just want to know which button to push!), gamma=2.2 is a widely-used value, both for monitor calibration and working color spaces.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>What's the difference between calibrating a monitor and profiling a monitor?</title>
+
+        <para>
+            When first learning about color management, many people are confused about the difference between calibrating and profiling a monitor (I know I was). Quoting from Hal Engel's excellent discussion in the &digikam; users forum:
+        </para>
+
+        <blockquote><para>
+            Calibration is a process where a device is brought into some defined state by making adjustments to its controls or some other physical means. For example, the act of calibrating a monitor involves adjusting its white point, black level, luminosity and gamma to predetermined or standard values using the monitor's controls and by altering the video card gamma ramp... In contrast to calibration, the process of creating a profile is a characterization of the device that does not involve making any changes or adjustments to the device. Rather it is a measurement process that results in a file that contains a precise mathematical description of the device's color and tonality characteristics. This file is an ICC profile. These characteristics include the transfer function from the device's color space to a standardized absolute color space (this is called a Profile Connection Space, PCS, in an ICC profile), the device's white point, black point, primaries and other information. Displays are normally characterized (profiled) in their calibrated state. To summarize, calibration makes changes to the device to alter its color reproduction characteristics to conform to some predetermined state. Profiling or characterization is a measurement process that results in a detailed description of the device's (normally calibrated) color reproduction characteristics. (cited from <ulink url="http://digikam.1695700.n4.nabble.com/Re-Color-Managed-View-How-td1739136.html#a14160285">here</ulink>)
+        </para></blockquote>
+
+        <para>
+            Calibrating your monitor technically is not really part of color management. But obviously a properly calibrated and/or profiled monitor is a prerequisite for a color-managed workflow. This tutorial does not cover the important topics of how to calibrate and profile a monitor. The <ulink url="http://www.argyllcms.com/">ArgyllCMS</ulink> and <ulink url="http://lprof.sourceforge.net/">LProf</ulink> documentation is very good and highly recommended reading. To use either of these programs to calibrate and/or profile your monitor, you will need a spectrophotometer. A spectrophotometer (sometimes called a "spider") is a device for measuring the RGB values of color patches projected onto the monitor screen by calibration/profiling software such as Argyll and LProf. The Argyll website maintains an up-to-date list of supported spectrophotometers. I believe LProf can use all the spectrophotometers that Argyll can use, as the two programs share the relevant sections of code.
+        </para>
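+
+        <para>
+            For orientation only, a typical ArgyllCMS calibrate-then-profile session looks roughly like the sketch below. The command names (dispcal, targen, dispread, colprof) are Argyll's own; the flags shown are common examples from its documentation, but exact options vary by version, so consult the Argyll docs before running anything:
+        </para>
+
+        <screen>
+dispcal -v -t6500 -g2.2 mymonitor         # calibrate; writes mymonitor.cal
+targen -v -d3 mymonitor                   # generate display test patches
+dispread -v -k mymonitor.cal mymonitor    # measure the patches on screen
+colprof -v -as mymonitor                  # build mymonitor.icc from the data
+        </screen>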
+
+    </sect3>
+
+    <sect3>
+        <title>Can I calibrate my monitor without a spectrophotometer? </title>
+
+        <para>
+            There are various methods given on the Internet for calibrating a monitor without using a spectrophotometer. These "eye-ball" methods are better than not calibrating your monitor at all, and depending on your eyeballs and your monitor, can produce quite usable results. But the eye-ball methods are not a substitute for a properly calibrated and profiled monitor. For the record, calibrating and profiling a monitor with a spectrophotometer, though intimidating at first, is not difficult. Spectrophotometers can be obtained for well under $100 US (if you opt for a more expensive model, make sure you are paying for a better piece of hardware, rather than just a more fully-featured accompanying bit of manufacturer's software that won't run under Linux). The Argyll and/or LProf documentation will guide you through the process of calibrating and profiling your monitor, without your having to learn very much color management theory. And if/when you learn enough about color management to realize that you want or need a more detailed monitor profile of a particular type, for a particular purpose, these two programs have all the advanced capabilities you could possibly hope for.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>
+            Assuming I've decided to work exclusively in the sRGB color space, what "&digikam; buttons" should I push after I calibrate my monitor?
+        </title>
+
+        <para>
+            If your monitor has been calibrated to the sRGB standard and you work exclusively in the sRGB color space, then you can disable color management in &digikam;. You don't need to tell &digikam; what monitor profile to use because &digikam; defaults to using the sRGB color space as the monitor color space profile. And you don't need to tell &digikam; to use a color-managed workflow because &digikam; defaults to using sRGB for your camera, printer, and working space, just as laid out by HP and MS back in 1996.
+        </para>
+
+        <para>
+            But if you want to take the first steps toward a color-managed workflow, then refer to the <ulink url="help:/digikam/using-setup.html#setup-iccprofiles">corresponding page of the Settings section</ulink>, enable color management, and select sRGB as your monitor profile, your camera profile, your working space profile, and your printer profile. If you've also used Argyll or LProf to produce a monitor profile after you calibrated your monitor - perhaps named "mymonitorprofile.icc" - then tell &digikam; to use "mymonitorprofile.icc" instead of sRGB as your monitor profile.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>Where are all the icc profiles located on my computer?</title>
+
+        <para>
+            Well, this is Linux and it depends on where you put them. I put all my icc profiles in the /usr/share/color/icc folder, which is the closest there is at this moment to a standard Linux location for icc profiles. If you use this folder for your icc profiles, likely you will need to change permissions on the folder to allow your user read/write access. Then you just tell &digikam; <ulink url="help:/digikam/using-setup.html#setup-iccprofiles">where your profiles are located</ulink>.
+        </para>
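+
+        <para>
+            As a minimal sketch (the profile name is hypothetical, and the commands assume a typical Linux system with sudo), installing a profile into that folder from a shell might look like this:
+        </para>
+
+        <screen>
+# create the standard profile folder if it does not already exist
+sudo mkdir -p /usr/share/color/icc
+# copy your profile into it
+sudo cp mymonitorprofile.icc /usr/share/color/icc/
+# optionally give your user write access to the folder
+sudo chown $USER /usr/share/color/icc
+</screen>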
+
+    </sect3>
+
+    <sect3>
+        <title>Do the lighting and wall/ceiling/drape/furniture colors near my monitor matter?</title>
+
+        <para>
+            Yes! Good lighting is a prerequisite for proper image editing and for comparing prints to the image on your screen. If the lighting near your workstation is too bright, colors on your monitor look too dark, and conversely. If the light from the fixtures in your workroom has a low CRI (color rendering index, meaning you don't have full spectrum bulbs), or if the light in your workroom comes from a window and so varies as the weather and time of day varies (or worse, is filtered through colored drapery), or if the walls and ceiling are creating color casts on your monitor, then your editing process will "correct" color casts that don't really exist. Best advice, as far as is consistent with maintaining harmony in the family: neutral grey walls and ceiling, cover the windows, wear neutral clothing, set appropriate light levels using appropriate bulbs and fixtures. For more information on appropriate light levels, bulbs and fixtures for editing images and viewing prints, see the following articles:
+        </para>
+
+        <itemizedlist>
+
+            <listitem><para>
+                <ulink url="http://www.creativepro.com/article/the-darkroom-makes-a-comeback" />
+            </para></listitem>
+
+            <listitem><para>
+                <ulink url="http://www.creativepro.com/article/the-darkroom-makes-a-comeback-part-2-" />
+            </para></listitem>
+
+            <listitem><para>
+                <ulink url="http://www.lrc.rpi.edu/programs/nlpip/lightingAnswers/fullSpectrum/abstract.asp" />
+            </para></listitem>
+
+        </itemizedlist>
+
+    </sect3>
+
+</sect2>
+
+<!--
+Local Variables:
+mode: sgml
+sgml-minimize-attributes:nil
+sgml-general-insert-case:lower
+sgml-indent-step:0
+sgml-indent-data:nil
+End:
+-->
diff --git a/digikam/editor-cm-pcs.docbook b/digikam/editor-cm-pcs.docbook
new file mode 100644
index 0000000..9ff3d89
--- /dev/null
+++ b/digikam/editor-cm-pcs.docbook
@@ -0,0 +1,62 @@
+<sect2 id="editor-cm-pcs">
+    <title>The Profiles Point to Real Colors in the Real World</title>
+
+    <sect3>
+        <title>
+            Camera, scanner, working space, monitor, printer - what do all these color profiles really do?
+        </title>
+
+        <para>
+            A color profile describes the color gamut of the device or space to which it belongs by specifying what real color in the real world corresponds to each trio of RGB values in the color space of the device (camera, monitor, printer) or working space.
+        </para>
+
+        <para>
+            The camera profile essentially says, for every RGB trio of values associated with every pixel in the image file produced from the raw file by the raw processing software, "this RGB image file trio" corresponds to "that real color as seen by a real observer in the real world" (or rather, as displayed on the IT8 target if you produced your own camera profile, but it amounts to the same thing - the goal of profiling your camera is to make the picture of the target look like the target).
+        </para>
+
+        <para>
+            You cannot see an image by looking at its RGB values. Rather you see an image by displaying it on a monitor or by printing it. When you profile your monitor, you produce a monitor profile that says "this RGB trio of values that the graphics card sends to the screen" will produce on the screen "that real color as seen by a real observer in the real world".
+        </para>
+
+        <para>
+            What the monitor profile and the camera profile have in common is the part (quoted above) about "that real color as seen by a real observer in the real world". Different trios of RGB numbers in, respectively, the monitor and camera color spaces point to the same real, visible color in the real world. Real colors in the real world provide the reference point for translating between all the color profiles your image will ever encounter on its way from camera to screen to editing program to print or the web.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>How can a color profile point to a real color in the real world?</title>
+
+        <para>
+            Real people don't even see the same colors when they look at the world, do they?
+        </para>
+
+        <para>
+            A long time ago (1931, although refinements continue to be made), the International Commission on Illumination (the CIE) decided to map out and mathematically describe all the colors visible to real people in the real world. So they showed a whole bunch of people a whole bunch of colors and asked them to say when "this" color matched "that" color, where the two visually matching colors were in fact produced by differing combinations of wavelengths. What was the value of such a strange procedure? Human color perception depends on the fact that we have three types of cone receptors with peak sensitivity to light at wavelengths of approximately 430, 540, and 570 nm, but with considerable overlap in sensitivity between the different cone types. One consequence of how we see color is that many different combinations of differing wavelengths of light will look like "the same color".
+        </para>
+
+        <para>
+            After extensive testing, the CIE produced the CIE-XYZ color space, which mathematically describes and models all the colors visible to an ideal human observer ("ideal" in the sense of modeling the tested responses of lots of individual humans). This color space is NOT a color profile in the normal sense of the word. Rather it provides an absolute "Profile Connection Space" (PCS) for translating color RGB values from one color space to another. (See <ulink url="http://en.wikipedia.org/wiki/Tristimulus#Tristimulus_values">here</ulink> and <ulink url="http://en.wikipedia.org/wiki/Color_vision">here</ulink>.)
+        </para>
+
+        <para>
+            CIE-XYZ is not the only PCS. Another commonly used PCS is CIE-Lab, which is mathematically derived from the CIE-XYZ space. CIE-Lab is intended to be "perceptually uniform", meaning "a change of the same amount in a color value should produce a change of about the same visual importance". As the <ulink url="http://en.wikipedia.org/wiki/Lab_color_space">Wikipedia article</ulink> puts it, "The three coordinates of CIELAB represent the lightness of the color (L* = 0 yields black and L* = 100 indicates diffuse white; specular white may be higher), its position between red/magenta and green (a*, negative values indicate green while positive values indicate magenta) and its position between yellow and blue (b*, negative values indicate blue and positive values indicate yellow)".
+        </para>
+
+        <para>
+            To be useful, color profiles need to be coupled with software that performs the translation from one color space to another via the PCS. In the world of Linux open source software (and also many closed-source, commercial programs), translation from one color space to another usually is done by <ulink url="http://www.littlecms.com/">LCMS</ulink>, the "Little CMS" color management engine. For what it's worth, my own testing has shown that LCMS does more accurate color space conversions than Adobe's proprietary color conversion engine.
+        </para>
+
+    </sect3>
+
+</sect2>
+
+<!--
+Local Variables:
+mode: sgml
+sgml-minimize-attributes:nil
+sgml-general-insert-case:lower
+sgml-indent-step:0
+sgml-indent-data:nil
+End:
+-->
diff --git a/digikam/editor-cm-rawfile.docbook b/digikam/editor-cm-rawfile.docbook
new file mode 100644
index 0000000..f43d464
--- /dev/null
+++ b/digikam/editor-cm-rawfile.docbook
@@ -0,0 +1,200 @@
+<sect2 id="editor-cm-rawfile">
+    <title>The Camera Profile and Raw File Development</title>
+
+    <sect3>
+        <title>What's the next step in color management? </title>
+
+        <para>
+            First and for the record, many excellent professional and amateur photographers save all their images as in-camera jpegs and work exclusively in the sRGB color space. But if you want to work in a larger color space, or if you want to work with raw files (even if you output sRGB image files from your raw files), read on.
+        </para>
+
+        <para>
+            Judging from questions asked in the &digikam; users' forum, if you are reading this tutorial you probably are shooting raw images with a dSLR and you are hoping that somewhere in the arcane waters of color management lies the answer to how to get a nice picture from your raw image file. And you're right! The next thing you need is the right camera profile for developing your raw image. But first let's answer the question you really might have been asking.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>
+            Why doesn't the image produced by raw converters like dcraw or ufraw look like the embedded preview displayed by &digikam;?
+        </title>
+
+        <para>
+            Glad you asked. All digital camera images start out as raw files, whether or not the camera allows the user the option to save the image as a raw file. When you ask the camera to save jpegs instead of raw files, the camera uses its on-board processor to convert the raw file to a jpeg. A small jpeg produced the same way is also embedded in every raw file as a preview, and that embedded preview is what your final image would have looked like if you had set your camera to save jpegs instead of raw files.
+        </para>
+
+        <para>
+            From here I will speak from my experience as a Canon user, but I will guess that most or all entry-level and mid-range dSLRs behave in a similar manner. Canon offers the user several picture styles - neutral, standard, portrait, landscape, and so forth - that determine what kind of processing will be done to the raw image file to produce the final image, whether the processing is done "in-camera" or later, using the proprietary Canon DPP software. The Canon DPP raw processing software does give the user additional control, but still manipulates the raw image file in accordance with the chosen picture style. Most of the Canon picture styles add a heavy S-curve and extra color saturation to give the picture more "pop". Even if you choose the "neutral" picture style (the Canon picture style that gives you the least modified tonality), and select "less contrast", "less saturation", "no noise reduction", and "no sharpening" in the DPP raw development dialog, you will find, if you know what to look for, that an S-curve and also shadow denoising have been applied to your image.
+        </para>
+
+        <para>
+            Libraw (which &digikam; uses to convert raw files to image files) doesn't add an S-curve to your image tonality. Libraw gives you the lights and darks that are actually recorded by the camera sensor. According to <ulink url="http://21stcenturyshoebox.com/essays/scenereferredworkflow.html">Tindeman</ulink> (an excellent read and source of good advice, with links to equally good sources of additional information), dcraw is one of only a handful of raw developers that actually gives you the "scene-referred" tonality. Ufraw also produces a scene-referred image by default (although ufraw gives the user the option to modify the scene-referred image by changing the tonal distribution and saturation). And the dcraw/ufraw scene-referred image IS flat-looking, because the camera sensor records light linearly, whereas our eyes are constantly interacting with our brain to accommodate dim and bright areas in a scene, meaning our brain to some extent "applies an S-curve" to the scene to enable us to better focus in on the areas of particular interest as we look around.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>
+            The embedded jpeg preview looks so much nicer than dcraw's output. What is the value in scene-referred tonality?
+        </title>
+
+        <para>
+            When you take a picture, presumably you have an idea of what you want the final image to look like. It is much easier to achieve that final image if you don't have to "undo" stuff that has already been done to your image. Once Canon (or Nikon, or Bibble, etc.) has applied their proprietary S-curves and shadow-denoising, sharpening, and so forth to your image, then your shadows, highlights, edge detail, etc. are already squashed, clipped, chopped, and otherwise altered and mangled. You've thrown information away and you cannot get it back. Especially in the shadows, even with 16-bit images (actually, 12- or 14-bits, depending on the camera, but it's encoded as 16-bits for the computer's convenience), there just isn't that much information to begin with.
+        </para>
+
+        <para>
+            It seems to me that the heart and soul of image processing is the deliberate manipulation of image tonality, color, selective sharpening, and so forth, such that the viewer focuses in on what you, the photographer, found of particular interest when you took the picture. Why give the art of image processing over to some proprietary raw processing software? In other words, "flat is good" if you'd rather give your images your own artistic interpretation. The alternative is to let the canned, proprietary algorithms produced by Canon, Nikon, Bibble, etc. interpret your images for you. (On the other hand, there is no denying that for many images, those canned algorithms are really pretty good!)
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>
+            I can see the value in starting my image-editing with a scene-referred rendition instead of the eye-popping rendition that I see in the embedded jpeg. But I'm telling you, the images produced by &digikam;/Libraw look really, really bad! Why?
+        </title>
+
+        <para>
+            Well, that depends. If the image looks very dark, then you asked dcraw to output a 16-bit file and you have run into a problem with dcraw not applying a gamma transform before outputting the image file. You can use imagemagick to apply the appropriate gamma transform to the image file produced by Libraw. Or you can find or make a camera profile with a gamma of 1. Or you can use ufraw, which applies the gamma transform for you.
+        </para>
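+
+        <para>
+            As a minimal sketch of the ImageMagick route (the file names are hypothetical, and a plain power-law gamma of 2.2 only approximates the true sRGB transfer curve):
+        </para>
+
+        <screen>
+# brighten a linear-gamma 16-bit tiff by applying a 2.2 gamma transform
+convert linear-output.tif -gamma 2.2 gamma-corrected.tif
+</screen>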
+
+        <para>
+            If your image has pink highlights, there's a solution. For an explanation of the problem, along with the command line cure for this problem, see <ulink url="http://www.luminous-landscape.com/forum/index.php?topic=23430.0">this "Luminous Landscape" forum post</ulink>.
+        </para>
+
+        <para>
+            If the image isn't dark but it looks really weird, probably you made some injudicious choices in the Libraw user interface. The Libraw interface conveniently allows you to "dial in" options that you would otherwise have to specify at the command line. However, convenience always comes at a price. First, the interface might not provide access to all the options that are available at the command line. And second, to get the most from the Libraw interface, you have to know what the buttons, sliders, etc. in the interface actually do. Which means you need to know what happens at the command line if you want to get the best results from using the interface. (This tutorial will not attempt to document how to use the Libraw user interface. &digikam; is developing at a rapid pace and anything I might write about the Libraw interface will surely be outdated in the near future.)
+        </para>
+
+        <para>
+            For example, if your embedded jpeg has very nice deep rich shadows but the Libraw-produced jpeg or tiff has blotchy red line patterns in the shadow areas, then you probably put an "x" in the "Advanced, Black point" option, with the slider set to 0. Uncheck the Black point box and try again. This box in the Libraw interface corresponds to the "-k" option when using dcraw at the command line. The "-k" option allows you to override dcraw's best estimate of where, in the shadow tones of your image, digital signal starts to override background noise. If you don't use the "-k" option at the command line, then dcraw calculates an appropriate value for you, based on its estimate of background noise. For my Canon 400d/xti, the dcraw-calculated background noise value is usually around 256 (the command line option "-v" will tell dcraw to report what it's doing as it processes your raw file). If, however, I use the "-K /path/to/blackframe.pgm" option to tell dcraw to subtract out a black frame, then dcraw will report the black point as "0", as there is now no need to set it higher to avoid the deepest shadows in the image, where noise typically drowns out signal. (A "black frame" is an exposure taken with the lens cap on, with the same exposure settings as, and ideally right after, taking the image being processed. The "-K" option allows dcraw to subtract background noise from the image.)
+        </para>
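+
+        <para>
+            To make the options concrete, here is a command-line sketch (the raw file name is hypothetical; "-4" requests 16-bit linear output and "-T" a tiff file):
+        </para>
+
+        <screen>
+# verbose 16-bit linear tiff, letting dcraw estimate the black point itself
+dcraw -v -4 -T myphoto.cr2
+# the same, but subtracting a dark frame instead of using a numeric black point
+dcraw -v -4 -T -K blackframe.pgm myphoto.cr2
+</screen>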
+
+    </sect3>
+
+    <sect3>
+        <title>Where do I find good information on digital noise?  </title>
+
+        <para>See the following excellent articles:</para>
+
+        <itemizedlist>
+
+            <listitem><para><ulink url="http://www.ronbigelow.com/articles/noise-1/noise-1.htm" /></para></listitem>
+
+            <listitem><para><ulink url="http://www.cambridgeincolour.com/tutorials/noise.htm" /></para></listitem>
+
+            <listitem><para><ulink url="http://www.clarkvision.com/imagedetail/digital.signal.to.noise/" /></para></listitem>
+
+        </itemizedlist>
+
+    </sect3>
+
+    <sect3>
+        <title>Why are the Canon and Nikon colors better than the colors produced by Libraw?</title>
+
+        <para>
+            Color rendition is one place where the Canon (and presumably Nikon) proprietary raw developing software does a really, really good job. Why? Because the proprietary raw processing software is coupled with camera profiles that are specific to raw images coming from your make and model of camera, when processed using your make and model camera's proprietary raw processing software. I've checked extensively, using an "eyedropper" to compare the output of various raw developers using various camera profiles from various sources - a very tedious though instructive process. With ufraw and dcraw (from the command line if not from &digikam;'s dcraw user interface), you can apply Canon's camera-model-picture-style-specific color profile(s) to the dcraw output during the raw development process, and the colors will still NOT be exactly the same as what Canon produces. Likewise, Bibble profiles work pretty well with the Bibble software, but they don't work quite as well, in my opinion, with Libraw as they do with Bibble's own software. And so on. And so forth.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>Why is a camera profile specific to a given make and model of camera? </title>
+
+        <para>
+            Digital cameras have an array of millions of little light sensors inside, making up either a CCD or a CMOS chip. These light-sensing pixels are color-blind - they only record the amount, not the color, of light falling on them. So to allow pixels to record color information, each pixel is capped by a transparent red, green, or blue lens, usually alternating in what is called a Bayer array (except for Foveon sensors, which work differently). A raw image is nothing more than an array of values indicating "how much light" passed through the red, blue, or green lens cap to reach the sensor.
+        </para>
+
+        <para>
+            Clearly, pixel response to light is the result of lots of camera-specific factors including: the nature of the sensor array itself, the precise coloring/transmissive qualities of the lens caps, and the particular analog-to-digital conversion and post-conversion processing that happens inside the camera to produce the raw image that gets stored on the card.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>What does "analog-to-digital conversion" mean?</title>
+
+        <para>
+            "Analog" means continuously varying, like how much water you can put in a glass. "Digitizing" an analog signal means that the continuously changing levels from the analog signal source are "rounded" to discrete quantities convenient to the binary numbers used by computers. The analog-to-digital conversion that takes place inside the camera is necessary because the light-sensing pixels are analog in nature - they collect a charge proportionate to the amount of light that reaches them. The accumulated charge on each pixel is then turned into a discrete, digital quantity by the camera's analog-to-digital converter. Which by the way explains why a 14-bit converter is better than a 12-bit converter - more precision in the conversion output means less information is thrown away in the conversion process.
+        </para>
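+
+        <para>
+            The arithmetic behind that claim is simple: a 12-bit converter can distinguish 2<superscript>12</superscript> = 4096 discrete levels, while a 14-bit converter can distinguish 2<superscript>14</superscript> = 16384 levels - four times as many gradations over the same analog range.
+        </para>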
+
+    </sect3>
+
+    <sect3>
+        <title>
+            Why is a camera profile specific to the raw processing program used to develop the raw file?
+        </title>
+
+        <para>
+            The whole point of interpolation using demosaicing algorithms such as dcraw's default AHD is to guess what color and intensity of light actually fell on any given pixel by interpolating information gathered from that single pixel plus its neighboring pixels (see <ulink url="http://en.wikipedia.org/wiki/Demosaic">Wikipedia article</ulink>). Every raw processing program makes additional assumptions such as "when is it signal and when is it background noise?",  "at what point has the sensor well reached full saturation?", and so forth. The resulting output of all these algorithms and assumptions that raw processing software makes is a trio of RGB values for each pixel in the image. Given the same raw file, different raw processors will output different RGB values.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>Where do I find a generic profile for my camera?</title>
+
+        <para>
+            The ufraw website <ulink url="http://ufraw.sourceforge.net/Colors.html">section on color management</ulink> has information on where to find ready-made camera profiles. If you poke around the &digikam; users forum archives, you'll find additional advice. If you keep hunting and experimenting, likely you will find a generic profile that works "well enough". However, as stated above, it's an unfortunate fact of digital imaging that the camera profiles supplied by Canon, Nikon, and the like don't work as well with raw converters other than each camera manufacturer's own proprietary raw converter. Which is why Bibble and Phase One, for example, have to make their own profiles for all the cameras that they support. So eventually you may decide that you want a camera profile that is specific to your camera, your lighting conditions, and your raw processing workflow.
+        </para>
+
+        <para>
+            <screenshot>
+                <screeninfo></screeninfo>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="&path;editor-cm-iccworkflowlogic.png" format="PNG"/>
+                    </imageobject>
+                    <textobject>
+                    <phrase></phrase>
+                    </textobject>
+                </mediaobject>
+            </screenshot>
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>
+            How do I get a camera profile specific to my camera, lighting conditions, and raw workflow?
+        </title>
+
+        <para>
+            Many commercial services provide profiling services, for a fee, of course. Or you can use LProf to profile your camera yourself. If you want to profile your own camera, you will need an "IT8 target", that is, an image containing squares of known colors. Along with the IT8 target, you will receive the appropriate set of known values for each square of color on the target.
+        </para>
+
+        <para>
+            If you plan to use LProf to profile your camera, check the documentation for a list of recommended targets. To profile your camera, you photograph the IT8 target under specified lighting conditions (for example, in daylight, usually taken to mean noon on a sunny day in the summer, with nothing nearby that might cast shadows or reflect color casts) and save the image as a raw file. Then you process the raw file using your particular raw processing software+settings and run the resulting image file through the profiling software. The profiling software compares the RGB values in the image produced by your camera+lighting conditions+raw processing routine with the RGB values in the original target and then produces your camera (icc) profile.
+        </para>
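+
+        <para>
+            If you prefer command-line tools, ArgyllCMS can do the same job; a rough sketch (the file names are hypothetical, and the exact ".cht" recognition file depends on your particular IT8 target):
+        </para>
+
+        <screen>
+# match the patches in the photographed target against the reference values
+scanin -v target-shot.tif it8.cht target-reference.txt
+# build a camera profile from the measured patch values (medium quality)
+colprof -v -D "my camera profile" -qm target-shot
+</screen>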
+
+        <para>
+            Profiling a camera is exactly analogous to profiling a monitor. When profiling a monitor, the profiling software tells the graphics card to send squares of color with particular RGB values to the screen. The spectrophotometer measures the actual color that is produced on the screen. When profiling a camera, the known colors are the RGB colors in the original patches on the IT8 target, which the profiling software compares to the colors produced by the digital image of the target, which was photographed in selected lighting conditions, saved as raw, then processed with specific raw processing software plus settings.
+        </para>
+
+        <para>
+            <ulink url="http://lprof.sourceforge.net/help/ufraw.html">Here</ulink> is a link to a "how to" for using LProf v1.11 and ufraw (and by analogy, any other raw processor) to produce a camera profile. Debian Lenny has LProf 1.11.4 in the APT repositories. More recent versions can be built from CVS. And here is a link to an affordable, well-regarded <ulink url="http://www.targets.coloraid.de/">IT8 target</ulink>.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>
+            How do I apply a camera profile to the 16-bit image file produced by my open source raw processing software?
+        </title>
+
+        <para>
+            If you are using the Libraw interface, <ulink url="help:/digikam/using-setup.html#setup-iccprofiles">here</ulink> is how to tell &digikam; which camera profile to use. If you are using dcraw from the command line, you have the choice of outputting your 16-bit image file with or without the camera profile already applied. If you ask dcraw to output the file without applying the camera profile, you can use LCMS's tifficc utility (also at the command line) to apply the camera profile. The advantage of using tifficc is that you can tell LCMS to use high quality conversion (dcraw seems to use the LCMS default medium). The disadvantage, of course, is that applying your camera profile from the command line adds one extra step to your raw workflow. If you are using ufraw, consult the ufraw user's guide.
+        </para>
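+
+        <para>
+            A minimal sketch of the tifficc step (the profile and file names are hypothetical):
+        </para>
+
+        <screen>
+# -i: input (camera) profile, -o: output (working space) profile
+# -t 1: relative colorimetric intent, -b: black point compensation
+tifficc -i mycameraprofile.icc -o myworkingspace.icc -t 1 -b in.tif out.tif
+</screen>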
+
+    </sect3>
+
+</sect2>
+
+<!--
+Local Variables:
+mode: sgml
+sgml-minimize-attributes:nil
+sgml-general-insert-case:lower
+sgml-indent-step:0
+sgml-indent-data:nil
+End:
+-->
diff --git a/digikam/editor-cm-rendering.docbook b/digikam/editor-cm-rendering.docbook
new file mode 100644
index 0000000..775e570
--- /dev/null
+++ b/digikam/editor-cm-rendering.docbook
@@ -0,0 +1,132 @@
+<sect2 id="editor-cm-rendering">
+    <title>Printer Profiles with Rendering Intents and Soft-Proofing</title>
+
+    <sect3>
+        <title>Where do I get a printer profile?</title>
+
+        <para>
+            Whew! We've come a long way - almost ready to print that image! Where do I get a printer profile?  Well, you already know the answer. You can use the generic profile that comes with your printer. You can purchase a professionally produced profile. If you ask, some commercial printing establishments will send you their printer profiles (which won't work with your printer!). You can make your own printer profile using Argyll, in which case your profile can be tailored to your particular paper, inks, and even image characteristics (if you are printing a series of images with a color palette limited to subdued browns, you don't need a printer profile that tries to make room for saturated cyans and blues). I cannot offer any more advice or links to more information on this subject because I've just started to learn about printing images (previously I've only viewed and shared my images via monitor display). But do see <ulink url="http://www.luminous-landscape.com/essays/fancy-graphics2.shtml">this page</ulink> for an excellent presentation of the benefits of producing your own printer profile, plus a resoundingly positive endorsement of using Argyll for making your printer profile.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>What about rendering intents?</title>
+
+        <para>
+            "Rendering intent" refers to the way color gamuts are handled when the intended target color space (for example, the monitor or the printer) cannot handle the full gamut of the source color space (for example the working space).
+        </para>
+
+        <para>
+            There are four commonly-used rendering intents: perceptual, relative colorimetric, absolute colorimetric, and saturation. I have searched the Internet looking for the best write-up on rendering intents and the trade-offs involved in choosing one rendering intent over another. I believe I found that article. See <ulink url="http://www.cambridgeincolour.com/tutorials/color-space-conversion.htm">this page</ulink>. Briefly quoting from the Cambridge in Color article on color space conversion:
+        </para>
+
+        <blockquote>
+
+            <para>
+                Perceptual and relative colorimetric rendering are probably the most useful conversion types for digital photography. Each places a different priority on how they render colors within the gamut mismatch region.
+            </para>
+
+            <para>
+                Relative colorimetric maintains a near exact relationship between in gamut colors, even if this clips out of gamut colors.
+            </para>
+
+            <para>
+                In contrast, perceptual rendering tries to also preserve some relationship between out of gamut colors, even if this results in inaccuracies for in gamut colors...
+            </para>
+
+            <para>
+                Absolute is similar to relative colorimetric in that it preserves in gamut colors and clips those out of gamut, but they differ in how each handles the white point... Relative colorimetric skews the colors within gamut so that the white point of one space aligns with that of the other, while absolute colorimetric preserves colors exactly (without regard to changing white point)...
+            </para>
+
+            <para>
+                Saturation rendering intent tries to preserve saturated colors.
+            </para>
+
+        </blockquote> 
+
+        <para>
+            I would quote the entire article (it's that good), but that would involve "unfair usage" I am sure. So go check out the article for yourself. The illustrations in the article really help to clarify what the different rendering intents actually do and the compromises involved in choosing one rendering intent over another.
+        </para>
+
+        <para>
+            For another excellent source of information on rendering intents with accompanying illustrations, see <ulink url="http://www.normankoren.com/color_management.html#Rendering_intent">this page</ulink>.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>What rendering intent should I use for displaying images on my monitor?</title>
+
+        <para>
+            The usual choice is relative colorimetric. I would suggest that you not use absolute colorimetric (&eg; with ufraw) unless you want very strange results.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>What is soft-proofing?</title>
+
+        <para>
+            Soft-proofing will show you the differences to be expected between what you see on your screen and what you will see when you make a print. To soft-proof, you need a profile for your printer (actually, for your printer-paper combination, as the paper used affects the ink colors and the white point). If you don't like the soft-proofed image, you can make changes (in your working space, not after converting the image to your printer space!) to bring the soft-proofed image more in line with how you want the final print to look.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>What rendering intent should I use when I soft-proof?  </title>
+
+        <para>
+            I've always heard that you ought to use "perceptual intent" when transforming an image from a larger working space to a smaller printer space. But especially in light of reading the aforementioned Cambridge in Color article, it seems that perceptual intent may or may not give the best results, depending on the respective gamuts of the image and printer/paper combination in question. I've been experimenting quite a bit lately with output intents for printing. I find that for my images, the colorimetric intents give clearer, brighter colors, albeit at the cost of having to carefully re-edit the image to avoid clipping highlights and shadows. Perusal of the various photography forums and articles available on the Internet indicates that "perceptual intent" is just an option, not by any means always "the best" option.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>And what about black point compensation?</title>
+
+        <para>
+            The clearest definition I could find about black point compensation is as follows:  "BPC is a way to make adjustments between the maximum black levels of digital files and the black capabilities of various digital devices [such as a printer]" (cited from <ulink url="http://www.newsandtech.com/issues/2004/05-04/pt/05-04_blackpoint.htm">this page</ulink>).
+        </para>
+
+        <para>
+            The article from which this definition comes offers a very clear explanation (albeit Adobe-centric) of how black point compensation works and the practical consequences of using or not using it. The bottom line advice is, it depends. Try soft-proofing with black point compensation on. If you like it, use it. Otherwise, don't use it.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>Can all of the rendering intents be used when converting from one color space to another?</title>
+
+        <para>
+            In a word, no. Which rendering intents are available when converting from one profile to another depends on the destination profile. Not every profile supports every rendering intent. What happens when you select an unsupported rendering intent is that LCMS quietly uses the profile's default rendering intent.
+        </para>
+
+        <para>
+            For an informative and humorous discussion of myths surrounding the use of various rendering intents, see <ulink url="http://www.colorwiki.com/wiki/Color_Management_Myths_21-25#Myth_.2321:_There_are_perceptual_rendering_intents_available_when_converting_from_scanner.2Fcamera-.3Eworkspace_or_workspace-.3Eworkspace.">here</ulink> and <ulink url="http://www.colorwiki.com/wiki/Color_Management_Myths_6-10">here</ulink>.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>
+            Why does the perceptual rendering intent get recommended so often as the "photographic" rendering intent?
+        </title>
+
+        <para>
+            It's a safe choice because it prevents clipping of shadows and highlights, although possibly at the expense of desaturating all your colors. So if you don't want to bother doing any soft-proofing, choose the perceptual rendering intent (if you don't want to soft-proof, probably you should also stick with sRGB).
+        </para>
+
+    </sect3>
+
+</sect2>
+
+<!--
+Local Variables:
+mode: sgml
+sgml-minimize-attributes:nil
+sgml-general-insert-case:lower
+sgml-indent-step:0
+sgml-indent-data:nil
+End:
+-->
diff --git a/digikam/editor-cm-srgb.docbook b/digikam/editor-cm-srgb.docbook
new file mode 100644
index 0000000..f963a04
--- /dev/null
+++ b/digikam/editor-cm-srgb.docbook
@@ -0,0 +1,68 @@
+<sect2 id="editor-cm-srgb">
+    <title>The sRGB color space</title>
+
+    <sect3>
+        <title>What is so special about the sRGB color space?</title>
+
+        <para>
+            sRGB is widely accepted as a standard color profile by virtually everyone involved with consumer-oriented imaging. sRGB was proposed in 1996 by Hewlett Packard and Microsoft as a standardized color space for consumer-oriented applications. As stated in the initial HP/MS proposal:
+
+            <blockquote><para>
+                Hewlett-Packard and Microsoft propose the addition of support for a standard color space, sRGB, within the Microsoft operating systems, HP products, the Internet, and all other interested vendors. The aim of this color space is to complement the current color management strategies by enabling a third method of handling color in the operating systems, device drivers and the Internet that utilizes a simple and robust device independent color definition. This will provide good quality and backward compatibility with minimum transmission and system overhead. Based on a calibrated colorimetric RGB color space well suited to Cathode Ray Tube (CRT) monitors, television, scanners, digital cameras, and printing systems, such a space can be supported with minimum cost to software and hardware vendors...
+            </para></blockquote>
+
+            <blockquote><para>
+                Currently, the ICC [International Color Consortium]... tracks and ensures that a color is correctly mapped from the input to the output color space... by attaching a profile for the input color space to the image in question. This is appropriate for high end users. However, there are a broad range of users that do not require this level of flexibility and control. Additionally, most existing file formats do not, and may never support color profile embedding, and finally, there are a broad range of uses [that] actually discourage people from appending any extra data to their files. A common standard RGB color space addresses these issues ... by merging the many standard and non-standard RGB monitor spaces into a single standard RGB color space. Such a standard could dramatically improve the color fidelity in the desktop environment. For example, if operating system vendors provide support for a standard RGB color space, the input and output device vendors that support this standard color space could easily and confidently communicate color without further color management overhead in the most common situations. (<ulink url="http://www.w3.org/Graphics/Color/sRGB">archived copy</ulink>)
+            </para></blockquote>
+
+            To summarize, the point of the by-now almost universally adopted sRGB color space was and is to make life easier for consumers (no need to worry about color management), less expensive for manufacturers (no need to worry about compatibility between consumer-level digital cameras or scanners,  monitors, printers, and so forth), and more convenient for displaying images on the Internet (don't worry about embedding and reading icc profiles - just assume sRGB).
+        </para>
+
+        <para>
+            <emphasis>So if sRGB works so well and makes life so easy for everyone, why use any other color space and thus be forced to worry about color management issues?</emphasis>
+        </para>
+
+        <para>
+            sRGB was designed to contain colors easily displayed on consumer-oriented monitors and printed by consumer-oriented printers manufactured in 1996. This least-common-denominator set of viewable and printable colors - the technical term is "color gamut" - is much smaller than the set of colors we can see in the real world, much smaller than the set of colors today's digital cameras can capture, much smaller than the set of colors today's printers can print, and much smaller than the color gamut of the new wide gamut monitors that are beginning to enter the consumer market. For anyone who wants to make use of the wider color gamuts available today even at the consumer level, the gamut of sRGB is too small. Conversely, if you don't intend to make use of an expanded gamut of colors at any point in your digital imaging workflow, then you don't need to worry about non-sRGB color spaces and all the attendant intricacies of color management.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>How small is sRGB?</title> 
+
+        <para>
+            A visual representation of the limitations of sRGB compared to the colors we actually see in the real world is presented below. It shows a two-dimensional representation of all the colors we can see (the horseshoe-shaped region) and the colors contained in the sRGB space (the smaller triangular region).
+        </para>
+
+        <para>
+            <screenshot>
+                <screeninfo></screeninfo>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="&path;editor-cm-542px-CIExy1931-sRGB.png" format="PNG"/>
+                    </imageobject>
+                    <textobject>
+                    <phrase></phrase>
+                    </textobject>
+                </mediaobject>
+            </screenshot>
+        </para>
+
+        <para>
+            If you would like to see a two-dimensional representation of sRGB compared to some of the larger working color spaces, see Bruce Lindbloom's excellent <ulink url="http://www.brucelindbloom.com/">site</ulink>, click on "Info" then on "Information About RGB Working Spaces".
+        </para>
+
+    </sect3>
+
+</sect2>
+
+<!--
+Local Variables:
+mode: sgml
+sgml-minimize-attributes:nil
+sgml-general-insert-case:lower
+sgml-indent-step:0
+sgml-indent-data:nil
+End:
+-->
diff --git a/digikam/editor-cm-wkspace.docbook b/digikam/editor-cm-wkspace.docbook
new file mode 100644
index 0000000..99ee9b7
--- /dev/null
+++ b/digikam/editor-cm-wkspace.docbook
@@ -0,0 +1,156 @@
+<sect2 id="editor-cm-wkspace">
+    <title>The Working Space</title>
+
+    <sect3>
+        <title>
+            So I told &digikam; where to find my monitor profile and I have a camera profile that I applied to the image file produced by my raw processing software. What's the next step in color management?
+        </title>
+
+        <para>
+            You need to choose a working color space so you can edit your image. LCMS will transform your image from your camera color space to your chosen working space, via the PCS specified by your camera color profile.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>Why can't I just edit my images in the color space described by the camera profile?</title>
+
+        <para>
+            After all, the camera profile should provide the best "fit" to the colors recorded by my camera, as processed by my raw processing procedure, right? Wikipedia says, "Working spaces, such as sRGB or Adobe RGB, are color spaces that facilitate good results while editing. For instance, pixels with equal values of RGB should appear neutral." "[P]ixels with equal values of RGB should appear neutral" just means that for any given pixel in an image that has been converted to a suitable working space, if R=G=B you should see grey or black or white on your screen. Many camera profiles violate this "neutral" condition. I am not aware of a list of other technical requirements for a suitable working space.
+        </para>
+
+        <para>
+            However, I can think of another good reason why you wouldn't want to edit your image in your camera profile color space. If you look at the size of a typical camera profile, it is on the order of a quarter to a half a megabyte or more. It's got a lot of information about all the changes that need to be made at different regions of color and tonality in the original scene, to get accurate color rendition from the RGB values that come out of the raw processor. The camera profile is accurate (at least for colors in the original target) but not particularly mathematically smooth.
+        </para>
+
+        <para>
+            Working space color profiles, on the other hand, are very small in size (half a kilobyte instead of half a megabyte) because they describe a color gamut in terms of smooth, continuous mathematical functions. Working space profiles don't need to make allowances for the "messiness" of real world sensors, so the mathematical manipulations performed during image editing will go much more smoothly and accurately than if you try to edit your image while it is still in the camera color space.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>Which working space should I choose?</title>
+
+        <para>
+            Everyone has an opinion. I'm just going to lay out some of the bits of information needed to make an informed choice. Working space profiles are characterized by:
+        </para>
+
+        <orderedlist>
+
+            <listitem><para>
+                Gamma (or other transfer function), which dictates how much the original linear intensity values captured by the camera sensor (and subjected to the in-camera A-to-D conversion, then interpolated by the raw processing program to produce the image file) are altered to make editing easier or more precise.
+            </para></listitem>
+
+            <listitem><para>
+                RGB primaries which dictate the range of colors, that is, the color gamut, covered by a given profile.
+            </para></listitem>
+
+            <listitem><para>
+                White point (usually D50 or D65 though other values may be used), which specifies the color temperature of the white point of the working space.
+            </para></listitem>
+
+        </orderedlist>
+
+    </sect3>
+
+    <sect3>
+        <title>What gamma should my working space have?</title>
+
+        <para>
+            The gamma of a color profile dictates the power transform that must take place to properly convert between color spaces with different gammas - for example, from an image's embedded color profile (perhaps your working color space or your camera color profile) to your chosen working space, from one working space to another, from your working space to your display profile, or from your working space to your printer's color space. Dcraw outputs a 16-bit image with a linear gamma, which means that a histogram of the resulting image file shows the actual amount of light that each pixel on the camera sensor captured during the exposure (paraphrasing <ulink url="http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm">this page</ulink>). (Which is why at present applying a camera profile to the dcraw output also requires applying an appropriate gamma transform to get to the desired working space, unless the camera profile also uses gamma=1.)
+        </para>
+
+        <para>
+            One practical consequence of the gamma of a working space is that the higher the gamma, the more discrete tones are available for editing in the shadows, with consequently fewer tones available in the highlights. Changing the gamma of an image redistributes the number of tones available in the lighter and darker areas of an image. Theoretically, if you are working on a very dark-toned (low key) image you might want a working space with a higher gamma. And if you are working on a high key image, say a picture taken in full noon sunlight of a wedding dress with snow as a backdrop, you might want to choose a working space with a lower gamma, so you have more available tonal gradations in the highlights.
+        </para>
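+
+        <para>
+            A quick worked example (assuming an idealized pure power-law gamma of 2.2, which real transfer functions such as sRGB's only approximate): a linear value of 0.5 is encoded as 0.5<superscript>1/2.2</superscript>, which is approximately 0.73, or about 186 out of 255 in an 8-bit file. The darker half of the linear range thus gets encoded values 0 through 186, while the brighter half gets only 187 through 255 - which is why a higher gamma leaves more discrete tones for editing in the shadows.
+        </para>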
+
+        <para>
+            Theory aside, in the real world of real image editing, almost everyone uses working spaces with a gamma of either 1.8 or 2.2. sRGB and L*-RGB are two notable exceptions.
+        </para>
+
+        <para>
+            sRGB uses a transfer function close to that of a CRT (and thus not necessarily relevant to image editing or to display on an LCD). As Wikipedia notes, "Unlike most other RGB color spaces the sRGB gamma can not be expressed as a single numerical value. The overall gamma is approximately 2.2, consisting of a linear (gamma 1.0) section near black, and a non-linear section elsewhere involving a 2.4 exponent and a gamma (slope of log output versus log input) changing from 1.0 through about 2.3" (cited from <ulink url="http://en.wikipedia.org/wiki/SRGB">this page</ulink>), which makes for some complicated math during image processing.
+        </para>
+
+        <para>
+            L*-RGB uses as its transfer function the same perceptually uniform transfer function as the CIELab color space. "When storing colors in limited precision values" using a perceptually uniform transfer function "can improve the reproduction of tones" (cited from <ulink url="http://en.wikipedia.org/wiki/Lab_color_space">this page</ulink>).
+        </para>
+
+        <para>
+            In addition to gamma=1.8 and gamma=2.2, the only other gamma for a working space that gets much mention or use is linear gamma, or gamma=1.0. As noted above, dcraw outputs linear gamma files if you ask for 16-bit output. Linear gamma is used in HDR (high dynamic range) imaging and also if one wants to avoid introducing gamma-induced errors into one's regular low dynamic range editing.
+        </para>
+
+        <para>
+            "Gamma-induced errors" is a topic outside the scope of this tutorial. But see "Gamma errors in picture scaling" (cited from <ulink url="http://www.4p8.com/eric.brasseur/gamma.html">this page</ulink>) for gamma-induced tonality shifts; and of course see Timo Autiokari's informative (albeit somewhat infamous) website for a whole-hearted endorsement of using linear gamma working spaces (Timo's website seems to be down at present, though archived copies of his articles are still available through google). Bruce Lindbloom mentions a commonly-encountered gamma-induced error that is caused by incorrectly calculating luminance in a nonlinear RGB working space (see <ulink url="http://www.brucelindbloom.com/index.html?WorkingSpaceInfo.html">this page</ulink>, sidenote 1). And in a similar vein, the calculations involved in mixing colors together to produce new colors (such as using a digital filter to add warmth to an image) result in gamma errors unless the new colors are calculated by first transforming all the relevant values back to their linear values.
+        </para>
+
+        <para>
+            Unfortunately and despite their undeniable mathematical advantages, linear gamma working spaces have so few tones in the shadows that (in my opinion) they are impossible to use for editing if one is working in 8-bits, and still problematic at 16-bits. When the day comes when we are all doing our editing on 32-bit files produced by our HDR cameras on our personal supercomputers, I predict that we will all be using working spaces with gamma=1. Adobe Lightroom is already using a linear gamma working space "under the hood", CS2 allows the option of using linear gamma for mixing colors, and Lightzone has always used a linear gamma working space.
+        </para>
+
+    </sect3>
+
+    <sect3>
+        <title>How many discrete tonal steps are there in a digital image?</title>
+
+        <para>
+            In an 8-bit image, you have 256 tonal steps from solid black to solid white. In a 16-bit image theoretically you have 65536 steps. But remember, those 16-bits started out as either 10 bits (=1024 steps), 12 bits (=4096 steps), or 14 bits (=16384 steps) as produced by the camera's A-to-D converter - the extra bits to reach 16-bits start out as just padding. The available tones are not distributed evenly from light to dark. In linear gamma mode (as the camera sensor sees things), there are a whole lot more tones in the highlights than in the shadows. Hence the advice, if you shoot raw, to "expose to the right but don't blow the highlights". See Ron Bigelow's <ulink url="http://www.ronbigelow.com/articles/articles.htm">articles on "why raw"</ulink> for a full discussion of the distribution of available tones in a raw image.
+        </para>
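+
+        <para>
+            To see why, consider a 12-bit linear capture: each f-stop down halves the recorded signal, so the brightest stop alone occupies levels 2048 through 4095 - half of all 4096 available levels - the next stop down occupies 1024 through 2047, and so on, until the deepest recorded stops are left with only a few dozen levels each. That arithmetic is the basis of the "expose to the right" advice.
+        </para>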
+
+    </sect3>
+
+    <sect3>
+        <title>Should I use a large-gamut or a small-gamut working space? </title>
+
+        <para>
+            One major consideration in choosing a working space is that some working spaces are bigger than others, meaning they cover more of the visible spectrum (and as a consequence include some imaginary colors - mathematical constructs that don't really exist). These bigger spaces offer the advantage of allowing you to keep all the colors captured by your camera and preserved by the LCMS conversion from your camera profile to the super-wide-gamut profile connection space and out again to your chosen working space.
+        </para>
+
+        <para>
+            But keeping all the possible colors comes at a price, as explained below. And it seems that any given digital image likely only contains a small subset of all the possible visible colors that your camera is capable of capturing. This small subset is easily contained in one of the smaller working spaces (an exception requiring a larger color gamut would be a picture of a highly saturated object such as a yellow daffodil).
+        </para>
+
+        <para>
+            Using a very large working space means that editing your image (applying curves, increasing saturation, etc.) can easily produce colors that your eventual output device (printer, monitor) simply cannot reproduce (you cannot see these colors while you're editing, either). So the conversion from your working space to your output device space (say your printer) will have to remap the out-of-gamut colors in your edited image, some of which might even be totally imaginary, to your printer color space with its much smaller color gamut. This remapping process will lead to inaccurate colors and loss of saturation at best. Even worse, the remapping can easily lead to banding (posterization - gaps in what should be a smooth color transition, say, across an expanse of blue sky) and clipping (&eg; your carefully crafted muted transitions across delicate shades of red might get remapped to a solid block of dull red after conversion to your printer's color space). Also, the experts say that 8-bit images just don't have enough tones to stretch across a wide gamut working space without banding and loss of saturation, even before conversion to an output space. So if you choose a large gamut working space, make sure you start with a 16-bit image.
+        </para>
+
+        <para>
+            To summarize, large gamut working spaces, improperly handled, can lead to lost information on output. Small gamut working spaces can clip information on input. Medium-sized gamut working spaces try to strike a happy medium. Like Wikipedia says, it's a trade-off.
+        </para>
+
+        <para>
+            Here are some oft-repeated bits of advice on choosing a working space:
+        </para>
+
+        <para>
+            <orderedlist>
+
+                <listitem><para>
+                    For images intended for the web, use (or at least convert the final image to) sRGB; a conversion sketch follows this list.
+                </para></listitem>
+
+                <listitem><para>
+                    For the most accuracy in your image editing (that is, making the most of your limited "bits" with the least risk of banding or clipping when you convert your image from your working space to an output space), use the smallest working space that includes all the colors in the scene that you photographed, plus a little extra room for those new colors you intentionally produce as you edit.
+                </para></listitem>
+
+                <listitem><para>
+                    If you are working in 8-bits rather than 16-bits, choose a smaller rather than a larger working space to avoid clipping and banding.
+                </para></listitem>
+
+                <listitem><para>
+                    For archival purposes, convert your raw file to a 16-bit tiff with a large gamut working space to avoid losing color information. Then convert this archival tiff to your medium-gamut or large-gamut working space of choice (saving the converted working tiff under a new name, of course).
+                </para></listitem>
+
+            </orderedlist>
+        </para>
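+
+        <para>
+            As an illustration of the first and last items above, here is a minimal Python sketch using Pillow's ImageCms module (a wrapper around LCMS); the file names and the ProPhoto profile path are placeholders for your own, and an 8-bit-per-channel RGB tiff is assumed, since Pillow's 16-bit support is limited:
+        </para>
+
+        <programlisting>
+from PIL import Image, ImageCms
+
+# Convert an archival wide-gamut tiff to sRGB for the web.
+im = Image.open("archival.tif")
+wide = ImageCms.getOpenProfile("/usr/share/color/icc/ProPhoto.icm")
+srgb = ImageCms.createProfile("sRGB")
+out = ImageCms.profileToProfile(im, wide, srgb,
+    renderingIntent=ImageCms.INTENT_PERCEPTUAL)
+out.save("web-copy.tif")
+        </programlisting>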
+
+        <para>
+            For more information on choosing a working space, see the "Information About RGB Working Spaces" page on <ulink url="http://www.brucelindbloom.com/">Bruce Lindbloom's site</ulink> for a visual comparison of the gamuts (arrays of included colors) of the various working color spaces. See <ulink url="http://www.luminous-landscape.com/tutorials/prophoto-rgb.shtml">here</ulink> and <ulink url="http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm">here</ulink> for presentations of the pros and cons, respectively, of using large gamut working spaces. And while you are on the cambridgeincolour.com website, check out the tutorial on color management.
+        </para>
+
+    </sect3>
+
+</sect2>
+
+<!--
+Local Variables:
+mode: sgml
+sgml-minimize-attributes:nil
+sgml-general-insert-case:lower
+sgml-indent-step:0
+sgml-indent-data:nil
+End:
+-->
diff --git a/digikam/editor-cm.docbook b/digikam/editor-cm.docbook
index cced597..19eabe9 100644
--- a/digikam/editor-cm.docbook
+++ b/digikam/editor-cm.docbook
@@ -1,415 +1,15 @@
- <sect1 id="editor-cm"> <title>RAW File Treatment and Color Management</title>
-
-    <sect2 id="editor-cm-intro"> <title>Introduction</title>
-      <para>The point of a color-managed workflow is to ensure that the colors coming from your camera or scanner have a predictable relationship with the colors you actually photographed or scanned, that the colors displayed on your monitor match the colors coming from your camera or scanner, and that the colors you print or display on the web match the colors you produced in your digital darkroom.</para>
-
-        <para>
-        <screenshot>
-        <screeninfo></screeninfo>
-        <mediaobject>
-        <imageobject>
-        <imagedata fileref="&path;editor-cm-colormanagement.png" format="PNG"/>
-        </imageobject>
-        <textobject>
-        <phrase></phrase>
-        </textobject>
-        </mediaobject>
-        </screenshot>
-        </para>
-
-      <sect3 id="buttons-to-push"> <title>Which buttons do I push?</title>
-         <para>When it comes to color management, everyone wants to know, "which buttons do I push to get the results I want". Unfortunately, color management of necessity involves making informed choices at every step along the image-processing workflow. The purpose of this tutorial is to provide sufficient background information on color management, along with links to more in-depth information, to enable you to begin to make your own informed decisions, based on your own desired results.</para>
-      </sect3>
-
-      <sect3 id="CM-needed"> <title>Is there anyone who doesn't need to worry about color management?</title>
-        <para>If your imaging workflow meets all six criteria listed below, then you don't need to worry about color management. </para>
-        <orderedlist>
-            <listitem><para>you are working at a monitor properly calibrated to the sRGB color space (more about that below)</para></listitem>
-            <listitem><para>your imaging workflow starts with an in-camera-produced jpeg already in the sRGB color space</para></listitem>
-            <listitem><para>you work exclusively in the sRGB color space for editing</para></listitem>
-            <listitem><para>your printer wants images in the sRGB color space</para></listitem>
-            <listitem><para>your scanner produces images in the sRGB color space</para></listitem>
-            <listitem><para>your only other image output is via email or the web, where sRGB is the de facto standard</para></listitem>
-       </orderedlist>
-
-      </sect3>
-     </sect2>
-
-     <sect2 id="editor-cm-sRGB"> <title>The sRGB color space</title>
-       <sect3>                   <title>What is so special about the sRGB color space?</title>
-         <para>sRGB is widely accepted as a standard color profile by virtually everyone involved with consumer-oriented imaging. sRGB was proposed in 1996 by Hewlett Packard and Microsoft as a standardized color space for consumer-oriented applications. As stated in the initial HP/MS proposal:
-            <blockquote><para>Hewlett-Packard and Microsoft propose the addition of support for a standard color space, sRGB, within the Microsoft operating systems, HP products, the Internet, and all other interested vendors. The aim of this color space is to complement the current color management strategies by enabling a third method of handling color in the operating systems, device drivers and the Internet that utilizes a simple and robust device independent color definition. This will provide good quality and backward compatibility with minimum transmission and system overhead. Based on a calibrated colorimetric RGB color space well suited to Cathode Ray Tube (CRT) monitors, television, scanners, digital cameras, and printing systems, such a space can be supported with minimum cost to software and hardware vendors...</para></blockquote>
-            <blockquote><para>Currently, the ICC [International Color Consortium]... tracks and ensures that a color is correctly mapped from the input to the output color space... by attaching a profile for the input color space to the image in question. This is appropriate for high end users. However, there are a broad range of users that do not require this level of flexibility and control. Additionally, most existing file formats do not, and may never support color profile embedding, and finally, there are a broad range of uses [that] actually discourage people from appending any extra data to their files. A common standard RGB color space addresses these issues ... by merging the many standard and non-standard RGB monitor spaces into a single standard RGB color space. Such a standard could dramatically improve the color fidelity in the desktop environment. For example, if operating system vendors provide support for a standard RGB color space, the input and output device vendors that support this standard color space could easily and confidently communicate color without further color management overhead in the most common situations. (<ulink url="http://www.w3.org/Graphics/Color/sRGB">archived copy</ulink>)</para></blockquote>
-         To summarize, the point of the by-now almost universally adopted sRGB color space was and is to make life easier for consumers (no need to worry about color management), less expensive for manufacturers (no need to worry about compatibility between consumer-level digital cameras or scanners,  monitors, printers, and so forth), and more convenient for displaying images on the Internet (don't worry about embedding and reading icc profiles - just assume sRGB).</para>
-       <para><emphasis>So if sRGB works so well and makes life so easy for everyone, why use any other color space and thus be forced to worry about color management issues?</emphasis></para>
-       <para>sRGB was designed to contain colors easily displayed on consumer-oriented monitors and printed by consumer-oriented printers manufactured in 1996. This least-common-denominator set of viewable and printable colors - the technical term is "color gamut" - is much smaller than the set of colors we can see in the real world, much smaller than the set of colors today's digital cameras can capture, much smaller than the set of colors today's printers can print, and much smaller than the color gamut of the new wide gamut monitors that are beginning to enter the consumer market. For anyone who wants to make use of the wider color gamuts available today even at the consumer level, the gamut of sRGB is too small. Conversely, if you don't intend to make use of an expanded gamut of colors at any point in your digital imaging workflow, then you don't need to worry about non-sRGB color spaces and all the attending intricacies of color management.</para>
-       </sect3>
-       <sect3>                   <title>How small is sRGB?</title> 
-          <para>A visual representation of the limitations of sRGB compared to the colors we actually see in the real world is presented below. It shows a two-dimensional representation of all the colors we can see (the horseshoe-shaped region) and the colors contained in the sRGB space (the smaller triangular region).</para>
-          <para>
-          <screenshot>
-          <screeninfo></screeninfo>
-          <mediaobject>
-           <imageobject>
-            <imagedata fileref="&path;editor-cm-542px-CIExy1931-sRGB.png" format="PNG"/>
-           </imageobject>
-           <textobject>
-            <phrase></phrase>
-           </textobject>
-          </mediaobject>
-          </screenshot>
-          </para>
-          <para>If you would like to see a two-dimensional representation of sRGB compared to some of the larger working color spaces, see Bruce Lindbloom's excellent <ulink url="http://www.brucelindbloom.com/">site</ulink>, click on "Info" then on "Information About RGB Working Spaces".</para>
-       </sect3>
-     </sect2>
-
-     <sect2 id="editor-cm-monitor"> <title>Calibrating and Profiling Your Monitor RGB</title> 
-          <sect3> <title>If I choose to work exclusively in the sRGB color space, do I need to calibrate my monitor?  </title>
-          <para>Yes!  Whether you stay within the color gamut provided by sRGB or not, you need a properly calibrated monitor because sRGB assumes that your monitor is calibrated to sRGB. Your monitor calibration closes the loop. If you work within the color gamut provided by sRGB then you need to calibrate your monitor to the sRGB standard (or produce and use an accurate monitor profile, or both). </para>
-          </sect3>
-          <sect3> <title>What are the consequences of working with an uncalibrated monitor?</title>
-          <para>There are several possible consequences, none of them good. Every monitor, calibrated or otherwise, has a native (uncalibrated) white point, expressed as a temperature in degrees Kelvin. The white point of a monitor (calibrated or not) is the color you see when you are looking at a patch of pure white on your screen. Pure white is when the RGB values in your image all equal 255 (as expressed in 8-bits), such as the plain white background of a web page or an office document. You are thinking, "white is white" but if you were able to line up several monitors calibrated to different white points, you would see that the higher the temperature of the monitor's white point, the bluer the screen looks in comparison with monitors with lower white points. If you can find the controls of your own monitor, change the temperature up and down (remembering to put it back to its initial setting when you are done, unless you decide you want a different white point). Your eyes, which adapt quickly to a constant white point, will easily discern the screen getting bluer and yellower as you move the white point higher and lower. If your uncalibrated monitor is too blue (native CRT color temperature is typically 9300K and sRGB assumes 6500K), as you edit your image you will overcompensate and produce images that will look yellowish and too warm on a properly calibrated monitor. Conversely, if your monitor is too yellow because the color temperature is set too low (I believe LCD native color temperature is around 5500K), your images will look blueish/too cool on a properly calibrated monitor. </para>
-          <para>Setting a proper white point is only part of monitor calibration. You also need a proper black point, brightness (luminance), and gamma (transfer) function. If your monitor is too dark because the black point is set too low, you will overcompensate and produce images that look washed out on a properly calibrated monitor. Conversely, if your monitor black point is set too high, your images will look too dark and overly saturated on a properly calibrated monitor. </para>
-          <para>If the brightness/contrast is set too high, you will assume your images have a lot more "pop" than they really have when viewed on a properly calibrated monitor, plus your eyes will hurt and your LCD screen will burn out faster. </para>
-          <para>If your monitor gamma is improperly set, your tonal variations from dark to light will be off. That is, the shadows or highlights might be overly compressed or expanded, leading you to compensate in the opposite direction. So when viewed on a properly calibrated monitor, the shadows might be too bright or dark (or the highlights too dark or bright), with the rest of the image suffering from tonal over-compression. And heaven help you if the internal R, G, and B guns (or LCD equivalent) of your monitor are improperly set (each gun has its own black point and gain), because the resulting color casts - too green, too magenta, too orange, etc, that you will inevitably create by "correcting" your image during editing - are very obvious when viewed on a properly calibrated monitor. </para>
-          <para>Whether or not your monitor is properly calibrated, you might be surprised by the results of comparing an image you've edited on your home monitor to the same image as displayed by other monitors in your house or on your friends' and neighbors' monitors. We certainly were - we have two Sony Trinitron monitors in our home, one with a failing (too high) green gun and one with a failing (too high) blue gun. Every image edited on either monitor looked very wrong on the other monitor, until we purchased a spectrophotometer to calibrate and profile both monitors. Unfortunately, at this point neither of these two monitors can be calibrated to display a proper black point, so they are no longer used for image editing - the point being that an additional benefit of using a spectrophotometer is you know when it's time to replace your monitor.</para>
-         </sect3>
-          <sect3> <title>The meaning of "black point" and "brightness" seems pretty clear, but what does "gamma" mean?</title>
-          <para>See <ulink url="http://en.wikipedia.org/wiki/Gamma_correction">this Wikipedia article</ulink> for an overview of the role of gamma in monitors and photography; the links at the bottom of the article are all excellent sources of additional information. Wikipedia says "Gamma compression, also known as gamma encoding, is used to encode linear luminance or RGB values into video signals or digital video file values; gamma expansion is the inverse, or decoding, process ... Gamma encoding helps to map data (both analog and digital) into a more perceptually uniform domain."  Yeah, I know, clear as mud. Read the Wikipedia article and study the pictures. Eventually it will sink in. If you wade very deeply into image editing and color management, eventually you will need to make decisions about what gamma (or other encoding/decoding function) you want to use when you calibrate your monitor, profile your digital camera, and choose a working color space. When in doubt (for those of you who just want to know which button to push!), gamma=2.2 is a widely-used value, both for monitor calibration and working color spaces.</para>
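-          <para>For the arithmetic-minded, here is a minimal Python sketch of gamma encoding and decoding, using the widely-used gamma=2.2 value mentioned above (the 18% mid-grey figure is just an example value):</para>
-          <programlisting>
-gamma = 2.2
-linear = 0.18                      # mid-grey, as a fraction of full brightness
-encoded = linear ** (1.0 / gamma)  # gamma compression (encoding)
-decoded = encoded ** gamma         # gamma expansion (decoding)
-print(round(encoded, 3), round(decoded, 3))  # 0.459 0.18
-          </programlisting>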
-         </sect3>
-          <sect3> <title>What's the difference between calibrating a monitor and profiling a monitor?  </title>
-          <para>When first learning about color management, many people are confused about the difference between calibrating and profiling a monitor (I know I was). Quoting from Hal Engel's excellent discussion in the &digikam; users forum:</para>
-             <blockquote><para>Calibration is a process where a device is brought into some defined state by making adjustments to its controls or some other physical means.  For example, the act of calibrating a monitor involves adjusting its white point, black level, luminosity and gamma to predetermined or standard values using the monitor's controls and by altering the video card gamma ramp... In contrast to calibration, the process of creating a profile is a characterization of the device that does not involve making any changes or adjustments to the device. Rather it is a measurement process that results in a file that contains a precise mathematical description of the device's color and tonality characteristics. This file is an ICC profile. These characteristics include the transfer function from the device's color space to a standardized absolute color space (this is called a Profile Connection Space, PCS, in an ICC profile), the device's white point, black point, primaries and other information.  Displays are normally characterized (profiled) in their calibrated state. To summarize, calibration makes changes to the device to alter its color reproduction characteristics to conform to some predetermined state.   Profiling or characterization is a measurement process that results in a detailed description of the device's (normally calibrated) color reproduction characteristics. (cited from <ulink url="http://digikam.1695700.n4.nabble.com/Re-Color-Managed-View-How-td1739136.html#a14160285">here</ulink>)</para></blockquote>
-
-          <para>Calibrating your monitor technically is not really part of color management. But obviously a properly calibrated and/or profiled monitor is a prerequisite for a color-managed workflow. This tutorial does not cover the important topics of how to calibrate and profile a monitor. The <ulink url="http://www.argyllcms.com/">ArgyllCMS </ulink> and <ulink url="http://lprof.sourceforge.net/">LProf </ulink> documentation is very good and highly recommended reading. To use either of these programs to calibrate and/or profile your monitor, you will need a spectrophotometer. A spectrophotometer (sometimes called a "spider") is a device for measuring the RGB values of color patches projected onto the monitor screen by calibration/profiling software such as Argyll and LProf. The Argyll website maintains an up-to-date list of supported spectrophotometers. I believe LProf can use all the spectrophotometers that Argyll can use, as the two programs share the relevant sections of code. </para>
-         </sect3>
-          <sect3> <title>Can I calibrate my monitor without a spectrophotometer? </title>
-          <para>There are various methods given on the Internet for calibrating a monitor without using a spectrophotometer. These "eye-ball" methods are better than not calibrating your monitor at all, and depending on your eyeball and your monitor, can produce quite usable results. But the eye-ball methods are not a substitute for a properly calibrated and profiled monitor. For the record, calibrating and profiling a monitor with a spectrophotometer, though intimidating at first, is not difficult. Spectrophotometers can be obtained for well under $100 US (if you opt for a more expensive model, make sure you are paying for a better piece of hardware, rather than just a more fully-featured accompanying bit of manufacturer's software that won't run under Linux). Argyll and/or LProf documentation will guide you through the process of calibrating and profiling your monitor, without your having to learn very much color management theory. And if/when you learn enough about color management to realize that you want or need a more detailed monitor profile of a particular type, for a particular purpose, these two programs have all the advanced capabilities you could possibly hope for.</para>
-         </sect3>
-          <sect3> <title>Assuming I've decided to work exclusively in the sRGB color space, what "&digikam; buttons" should I push after I calibrate my monitor? </title>
-          <para>If your monitor has been calibrated to the sRGB standard and you work exclusively in the sRGB color space, then you can disable color management in &digikam;. You don't need to tell &digikam; what monitor profile to use because &digikam; defaults to using the sRGB color space as the monitor color space profile. And you don't need to tell &digikam; to use a color-managed workflow because &digikam; defaults to using sRGB for your camera, printer, and working space, just as laid out by HP and MS back in 1996. </para>
-          <para>But if you want to take the first steps toward a color-managed workflow, then refer to the <ulink url="help:/digikam/using-setup.html#setup-iccprofiles">corresponding page of Settings</ulink>, enable color management, and select sRGB as your monitor profile, your camera profile, your working space profile, and your printer profile. If you've also used Argyll or LProf to produce a monitor profile after you calibrated your monitor - perhaps named "mymonitorprofile.icc" - then tell &digikam; to use "mymonitorprofile.icc" instead of sRGB as your monitor profile.</para>
-         </sect3>
-          <sect3> <title> Where are all the icc profiles located on my computer? </title>
-          <para>Well, this is Linux and it depends on where you put them. I put all my icc profiles in the /usr/share/color/icc folder, which is the closest there is at this moment to a standard Linux location for icc profiles. If you use this folder for your icc profiles, likely you will need to change permissions on the folder to allow your user read/write access. Then you just tell &digikam; <ulink url="help:/digikam/using-setup.html#setup-iccprofiles">where your profiles are located</ulink>.</para>
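-          <para>If you want to double-check what is in that folder, a couple of lines of Python will list it (the path is the one mentioned above; adjust it if you keep your profiles elsewhere):</para>
-          <programlisting>
-import pathlib
-
-# List the icc profiles in the conventional Linux location.
-for profile in sorted(pathlib.Path("/usr/share/color/icc").glob("*.ic[cm]")):
-    print(profile.name)
-          </programlisting>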
-         </sect3>
-          <sect3> <title>Does the lighting and wall/ceiling/drape/furniture colors near my monitor matter? </title>
-          <para>Yes!  Good lighting is a prerequisite for proper image editing and for comparing prints to the image on your screen. If the lighting near your workstation is too bright, colors on your monitor look too dark, and vice versa. If the light from the fixtures in your workroom has a low CRI (color rendering index, meaning you don't have full spectrum bulbs), or if the light in your workroom comes from a window and so varies as the weather and time of day vary (or worse, is filtered through colored drapery), or if the walls and ceiling are creating color casts on your monitor, then your editing process will "correct" color casts that don't really exist. Best advice, as far as is consistent with maintaining harmony in the family: neutral grey walls and ceiling, cover the windows, wear neutral clothing, set appropriate light levels using appropriate bulbs and fixtures. For more information on the appropriate light levels, bulbs and fixtures for editing images and viewing prints, see the following articles:</para>
-          <itemizedlist>
-          <listitem><para><ulink url="http://www.creativepro.com/article/the-darkroom-makes-a-comeback" /></para></listitem>
-          <listitem><para><ulink url="http://www.creativepro.com/article/the-darkroom-makes-a-comeback-part-2-" /></para></listitem>
-          <listitem><para><ulink url="http://www.lrc.rpi.edu/programs/nlpip/lightingAnswers/fullSpectrum/abstract.asp" /></para></listitem>
-          </itemizedlist>
-         </sect3>
-    </sect2>
-
-     <sect2 id="editor-cm-rawfile"> <title>The Camera Profile and Raw File Development</title>
-          <sect3> <title>What's the next step in color management? </title>
-          <para>First and for the record, many excellent professional and amateur photographers save all their images as in-camera jpegs and work exclusively in the sRGB color space. But if you want to work in a larger color space, or if you want to work with raw files (even if you output sRGB image files from your raw files), read on.</para>
-          <para>Judging from questions asked in the &digikam; user's forum, if you are reading this tutorial you probably are shooting raw images with a digital dSLR and you are hoping that somewhere in the arcane waters of color management lies the answer to how to get a nice picture from your raw image file. And you're right!  The next thing you need is the right camera profile for developing your raw image. But first let's answer the question you really might have been asking:</para>
-         </sect3>
-          <sect3> <title>Why doesn't the image produced by raw converters like dcraw or ufraw look like the embedded preview displayed by digikam?</title>
-          <para>Glad you asked. All digital camera images start out as raw files, whether or not the camera allows the user the option to save the image as a raw file. When you ask the camera to save jpegs instead of raw files, the camera uses its on-board processor to convert the raw file to a jpeg. That embedded preview is what your final image would have looked like if you had set your camera to save jpegs instead of raw files. </para>
-          <para>From here I will speak from my experience as a Canon user, but I will guess that most or all entry-level and mid-range dSLRs behave in a similar manner. Canon offers the user several picture styles - neutral, standard, portrait, landscape, and so forth - that determine what kind of processing will be done to the raw image file to produce the final image, whether the processing is done "in-camera" or later, using the proprietary Canon DPP software. The Canon DPP raw processing software does give the user additional control, but still manipulates the raw image file in accordance with the chosen picture style. Most of the Canon picture styles add a heavy S-curve and extra color saturation to give the picture more "pop". Even if you choose the "neutral" picture style (the Canon picture style that gives you the least modified tonality), and select "less contrast", "less saturation", "no noise reduction", and "no sharpening" in the DPP raw development dialog, you will find, if you know what to look for, that an S-curve and also shadow denoising has been applied to your image.</para>
-          <para>Dcraw (which &digikam; uses to convert raw files to image files) doesn't add an S-curve to your image tonality. Dcraw gives you the lights and darks that are actually recorded by the camera sensor. According to <ulink url="http://21stcenturyshoebox.com/essays/scenereferredworkflow.html">Tindeman</ulink> (an excellent read and a source of good advice, with links to equally good sources of additional information), dcraw is one of only a handful of raw developers that actually gives you the "scene-referred" tonality. Ufraw also produces a scene-referred image by default (although ufraw gives the user the option to modify the scene-referred image by changing the tonal distribution and saturation). And the dcraw/ufraw scene-referred image IS flat-looking, because the camera sensor records light linearly, whereas our eyes are constantly interacting with our brain to accommodate dim and bright areas in a scene, meaning our brain to some extent "applies an S-curve" to the scene to enable us to better focus in on the areas of particular interest as we look around.</para>
-         </sect3>
-          <sect3> <title>The embedded jpeg preview looks so much nicer than dcraw's output. What is the value in scene-referred tonality? </title>
-          <para>When you take a picture, presumably you have an idea of what you want the final image to look like.  It is much easier to achieve that final image if you don't have to "undo" stuff that has already been done to your image.  Once Canon (or Nikon, or Bibble, etc) has applied their proprietary S-curves and shadow-denoising, sharpening, etc to your image, then your shadows, highlights, edge detail, etc, are already squashed, clipped, chopped, and otherwise altered and mangled.  You've thrown information away and you cannot get it back.  Especially in the shadows, even with 16-bit images (actually, 12- or 14-bits, depending on the camera, but it's encoded as 16-bits for the computer's convenience), there just isn't that much information to begin with. </para>
-          <para>It seems to me that the heart and soul of image processing is the deliberate manipulation of image tonality, color, selective sharpening, and so forth, such that the viewer focuses in on what you, the photographer, found of particular interest when you took the picture. Why give the art of image processing over to some proprietary raw processing software?  In other words, "flat is good" if you'd rather give your images your own artistic interpretation.  The alternative is to let the canned, proprietary algorithms produced by Canon, Nikon, Bibble, etc interpret your images for you.  (On the other hand, there is no denying that for many images, those canned algorithms are really pretty good!)</para>
-         </sect3>
-          <sect3> <title>Well, that's all very interesting. I can see the value in starting my image-editing with a scene-referred rendition instead of the eye-popping rendition that I see in the embedded jpeg. But I'm telling you, the images produced by digikam/dcraw look really really bad!  Why? </title>
-          <para>Well, that depends. If the image looks very dark, then you asked dcraw to output a 16-bit file and you have run into a problem with dcraw not applying a gamma transform before outputting the image file. You can use imagemagick to apply the appropriate gamma transform to the image file produced by dcraw. Or you can find or make a camera profile with a gamma of 1. Or you can use ufraw, which applies the gamma transform for you. </para>
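-          <para>If you would rather not reach for imagemagick, here is a minimal Python sketch of the missing gamma transform, assuming numpy is installed; the three 16-bit values are made-up samples standing in for dcraw's linear output:</para>
-          <programlisting>
-import numpy as np
-
-# Scale the linear 16-bit samples to 0..1, raise them to 1/2.2, and
-# scale back up; the shadow values rise the most.
-raw = np.array([1180, 11796, 47185], dtype=np.uint16)  # toy samples
-lifted = (raw / 65535.0) ** (1.0 / 2.2) * 65535.0
-print(lifted.astype(np.uint16))
-          </programlisting>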
-          <para>If your image has pink highlights, there's a solution. For an explanation of the problem, along with the command line cure for this problem, see <ulink url="http://www.luminous-landscape.com/forum/index.php?topic=23430.0">this "Luminous Landscape" forum post</ulink>.</para>
-          <para>If the image isn't dark but it looks really weird, you probably made some injudicious choices in the digikam/dcraw user interface.   The digikam/dcraw interface conveniently allows you to "dial in" options that you would otherwise have to specify at the command line. However, convenience always comes at a price.  First, the interface might not provide access to all the options that are available at the command line (as of &digikam; 0.9.4, only some of the dcraw command line options are available from the interface). And second, to get the most from the digikam/dcraw interface, you have to know what the buttons, sliders, etc in the interface actually do. Which means you need to know what happens at the command line if you want to get the best results from using the interface. (This tutorial will not attempt to document how to use the digikam/dcraw user interface. &digikam; is developing at a rapid pace and anything I might write about the digikam/dcraw interface will surely be outdated in the near future.)</para>
-          <!--[[Gerhard, I could provide a couple of sample images here if desired]]-->
-          <para>For example, if your embedded jpeg has very nice deep rich shadows but the digikam/dcraw-produced jpeg or tiff has blotchy red line patterns in the shadow areas, then you probably put an "x" in the "Advanced, Black point" option, with the slider set to 0. Uncheck the Black point box and try again. This box in the digikam/dcraw interface corresponds to the "-k" option when using dcraw at the command line. The "-k" option allows you to override dcraw's best estimate of where, in the shadow tones of your image, digital signal starts to override background noise. If you don't use the "-k" option at the command line, then dcraw calculates an appropriate value for you, based on its estimate of background noise. For my Canon 400d/xti, the dcraw-calculated background noise value is usually around 256 (the command line option "-v" will tell dcraw to tell you what it's doing as it processes your raw file). If, however, I use the "-K /path to blackframe.pgm" option to tell dcraw to subtract out a black frame, then dcraw will report the black point as "0", as there is now no need to set it higher to avoid the deepest shadows in the image, where noise typically drowns out signal. (A "black frame" is an exposure taken with the lens cap on, with the same exposure settings as, and ideally right after, taking the image being processed. The "-K" option allows dcraw to subtract background noise from the image.)</para>
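-          <para>Scripted from Python, the two dcraw invocations just described might look like the following sketch; the file names are placeholders, and the "-4" and "-T" options (16-bit linear output, tiff output) are standard dcraw options added here for completeness:</para>
-          <programlisting>
-import subprocess
-
-# First run: let dcraw report its black-point estimate ("-v").
-subprocess.run(["dcraw", "-v", "-4", "-T", "photo.cr2"], check=True)
-# Second run: develop again, subtracting a black frame ("-K").
-subprocess.run(["dcraw", "-K", "blackframe.pgm", "-4", "-T", "photo.cr2"],
-               check=True)
-          </programlisting>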
-         </sect3>
-          <sect3> <title>Where do I find good information on digital noise?  </title>
-          <para>See the following excellent articles:</para>
-          <itemizedlist>
-           <listitem><para><ulink url="http://www.ronbigelow.com/articles/noise-1/noise-1.htm" /></para></listitem>
-           <listitem><para><ulink url="http://www.cambridgeincolour.com/tutorials/noise.htm" /></para></listitem>
-           <listitem><para><ulink url="http://www.clarkvision.com/imagedetail/digital.signal.to.noise/" /></para></listitem>
-          </itemizedlist>
-         </sect3>
-          <sect3> <title>Where do I find good information on the dcraw command line options?  </title>
-          <para>The very best source of information on how dcraw processes raw files is found <ulink url="http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm">here</ulink>. </para>
-          <para>If you want to work with raw files, I recommend that you read Guillermo's article two or three times over. Guillermo believes that dcraw produces output superior to the raw processing done by commercial raw processors. After testing every commercial raw processing program I could find, I eventually concluded that dcraw produces superior results.</para>
-          <para>The dcraw manpage explaining all the command line options is <ulink url="http://www.cybercom.net/~dcoffin/dcraw/dcraw.1.html">here</ulink>.</para>
-         </sect3>
-          <sect3> <title>Why are the Canon and Nikon colors better than the colors produced by dcraw?  </title>
-          <para>Color rendition is one place where the Canon (and presumably Nikon) proprietary raw developing software does a really, really good job. Why?  Because the proprietary raw processing software is coupled with camera profiles that are specific to raw images coming from your make and model of camera, when processed using your make and model camera's proprietary raw processing software. I've checked extensively, using an "eyedropper" to  compare the output of various raw developers using various camera profiles from various sources - a very tedious though instructive process. With ufraw and dcraw (from the command line if not from digikam's dcraw user interface), you can apply Canon's camera-model-picture-style-specific color profile(s) to the dcraw output during the raw development process, and the colors will still NOT be exactly the same as what Canon produces. Likewise, Bibble profiles work pretty well with the Bibble software, but they don't work quite as well, in my opinion, with dcraw as they do with Bibble's own software. And so on. And so forth. </para>
-         </sect3>
-          <sect3> <title>Why is a camera profile specific to a given make and model of camera? </title>
-          <para>Digital cameras have an array of millions of little light sensors inside, making up either a CCD or a CMOS chip. These light-sensing pixels are color-blind - they only record the amount, not the color, of light falling on them. So to allow pixels to record color information, each pixel is capped by a transparent red, green, or blue lens, usually alternating in what is called a Bayer array (except for Foveon sensors, which work differently). A raw image is nothing more than an array of values indicating "how much light" passed through the red, blue, or green lens cap to reach the sensor. </para>
-          <para>Clearly, pixel response to light is the result of lots of camera-specific factors including: the nature of the sensor array itself, the precise coloring/transmissive qualities of the lens caps, and the particular analog-to-digital conversion and post-conversion processing that happens inside the camera to produce the raw image that gets stored on the card. </para>
-         </sect3>
-          <sect3> <title>What does "analog-to-digital conversion" mean?</title>
-          <para>"Analog" means continuously varying, like how much water you can put in a glass. "Digitizing" an analog signal means that the continuously changing levels from the analog signal source are "rounded" to discrete quantities convenient to the binary numbers used by computers. The analog-to-digital conversion that takes place inside the camera is necessary because the light-sensing pixels are analog in nature - they collect a charge proportionate to the amount of light that reaches them. The accumulated charge on each pixel is then turned into a discrete, digital quantity by the camera's analog-to-digital converter. Which by the way explains why a 14-bit converter is better than a 12-bit converter - more precision in the conversion output means less information is thrown away in the conversion process.</para>
-         </sect3>
-          <sect3> <title>Why is a camera profile specific to the raw processing program used to develop the raw file?</title>
-          <para>The whole point of interpolation using demosaicing algorithms such as dcraw's default AHD is to guess what color and intensity of light actually fell on any given pixel by interpolating information gathered from that single pixel plus its neighboring pixels (see <ulink url="http://en.wikipedia.org/wiki/Demosaic">Wikipedia article</ulink>). Every raw processing program makes additional assumptions such as "when is it signal and when is it background noise?",  "at what point has the sensor well reached full saturation?", and so forth. The resulting output of all these algorithms and assumptions that raw processing software makes is a trio of RGB values for each pixel in the image. Given the same raw file, different raw processors will output different RGB values. </para>
-         </sect3>
-          <sect3> <title>Where do I find a generic profile for my camera?  </title>
-          <para>The ufraw website <ulink url="http://ufraw.sourceforge.net/Colors.html">section on color management</ulink> has information on where to find ready-made camera profiles. If you poke around the &digikam; users forum archives, you'll find additional advice. If you keep hunting and experimenting, likely you will find a generic profile that works "well enough". However, as stated above, it's an unfortunate fact of digital imaging that the camera profiles supplied by Canon, Nikon, and the like don't work as well with raw converters other than each camera manufacturer's own proprietary raw converter. Which is why Bibble and Phase One, for example, have to make their own profiles for all the cameras that they support. So eventually you may decide that you want a camera profile that is specific to your camera, your lighting conditions, and your raw processing workflow.</para>
-
-          <para>
-          <screenshot>
-          <screeninfo></screeninfo>
-          <mediaobject>
-           <imageobject>
-            <imagedata fileref="&path;editor-cm-iccworkflowlogic.png" format="PNG"/>
-           </imageobject>
-           <textobject>
-            <phrase></phrase>
-           </textobject>
-          </mediaobject>
-          </screenshot>
-          </para>
-
-         </sect3>
-
-         <sect3> <title>How do I get a camera profile specific to my camera, lighting conditions, and raw workflow? </title>
-          <para>Many commercial services provide profiling services, for a fee, of course. Or you can use LProf to profile your camera yourself. If you want to profile your own camera, you will need an "IT8 target", that is, an image containing squares of known colors. Along with the IT8 target, you will receive the appropriate set of known values for each square of color on the target. </para>
-          <para>If you plan to use LProf to profile your camera, check the documentation for a list of recommended targets. To profile your camera, you photograph the IT8 target under specified lighting conditions (for example, in daylight, usually taken to mean noon on a sunny day in the summer, with nothing nearby that might cast shadows or reflect color casts) and save the image as a raw file. Then you process the raw file using your particular raw processing software+settings and run the resulting image file through the profiling software. The profiling software compares the RGB values in the image produced by your camera+lighting conditions+raw processing routine with the RGB values in the original target and then produces your camera (icc) profile.   </para>
-          <para>Profiling a camera is exactly analogous to profiling a monitor. When profiling a monitor, the profiling software tells the graphics card to send squares of color with particular RGB values to the screen. The spectrophotometer measures the actual color that is produced on the screen. When profiling a camera, the known colors are the RGB colors in the original patches on the IT8 target, which the profiling software compares to the colors produced by the digital image of the target, which was photographed in selected lighting conditions, saved as raw, then processed with specific raw processing software+settings.  </para>
-          <para><ulink url="http://lprof.sourceforge.net/help/ufraw.html">Here</ulink> is a link to a "how to" for using LProf v1.11 and ufraw (and by analogy, any other raw processor) to produce a camera profile. Debian Lenny has LProf 1.11.4 in the APT repositories. More recent versions can be built from CVS. And here is a link to an affordable, well-regarded <ulink url="http://www.targets.coloraid.de/">IT8 target</ulink>.</para>
-         </sect3>
-          <sect3> <title>How do I apply a camera profile to the 16-bit image file produced by my open source raw processing software? </title>
-          <para>If you are using the digikam/dcraw interface, <ulink url="help:/digikam/using-setup.html#setup-iccprofiles">here</ulink> is how to tell &digikam; which camera profile to use. If you are using dcraw from the command line, you have the choice of outputting your 16-bit image file with or without the camera profile already applied. If you ask dcraw to output the file without applying the camera profile, you can use LCMS's tifficc utility (also at the command line) to apply the camera profile. The advantage of using tifficc is that you can tell LCMS to use high quality conversion (dcraw seems to use the LCMS default medium). The disadvantage, of course, is that applying your camera profile from the command line adds one extra step to your raw workflow. If you are using ufraw, consult the ufraw user's guide.</para>
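-          <para>For those who like to script their workflow, here is a minimal Python sketch of the tifficc step; the profile and file names are placeholders for your own:</para>
-          <programlisting>
-import subprocess
-
-# Apply the camera profile to dcraw's untagged 16-bit tiff, converting
-# from camera space to the chosen working space with LCMS's tifficc.
-subprocess.run(
-    ["tifficc", "-i", "mycamera.icc", "-o", "myworkingspace.icc",
-     "dcraw-output.tif", "converted.tif"],
-    check=True)
-          </programlisting>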
-       </sect3>
-     </sect2>
-
-     <sect2 id="editor-cm-pcs"> <title>The Profiles Point to Real Colors in the Real World</title>
-        <sect3> <title>Camera, scanner, working space, monitor, printer - what do all these color profiles really do?</title>
-          <para>A color profile describes the color gamut of the device or space to which it belongs by specifying what real color in the real world corresponds to each trio of RGB values in the color space of the device (camera, monitor, printer) or working space. </para>
-          <para>The camera profile essentially says: for every RGB trio of values associated with every pixel in the image file produced from the raw file by the raw processing software, "this RGB image file trio" corresponds to "that real color as seen by a real observer in the real world" (or rather, as displayed on the IT8 target if you produced your own camera profile, but it amounts to the same thing - the goal of profiling your camera is to make the picture of the target look like the target). </para>
-          <para>You cannot see an image by looking at its RGB values. Rather you see an image by displaying it on a monitor or by printing it. When you profile your monitor, you produce a monitor profile that says "this RGB trio of values that the graphics card sends to the screen" will produce on the screen "that real color as seen by a real observer in the real world". </para>
-          <para>What the monitor profile and the camera profile have in common is the part (quoted above) about "that real color as seen by a real observer in the real world."  Different trios of RGB numbers in, respectively, the monitor and camera color spaces point to the same real, visible color in the real world. Real colors in the real world provide the reference point for translating between all the color profiles your image will ever encounter on its way from camera to screen to editing program to print or the web. </para>
-       </sect3>
-        <sect3> <title>How can a color profile point to a real color in the real world?  </title>
-          <para>Real people don't even see the same colors when they look at the world, do they?  </para>
-          <para>A long time ago (1931, although refinements continue to be made), the International Commission on Illumination (the CIE) decided to map out and mathematically describe all the colors visible to real people in the real world. So they showed a whole bunch of people a whole bunch of colors and asked them to say when "this" color matched "that" color, where the two visually matching colors were in fact produced by differing combinations of wavelengths. What was the value of such a strange procedure?  Human color perception depends on the fact that we have three types of cone receptors with peak sensitivity to light at wavelengths of approximately 430, 540, and 570 nm, but with considerable overlap in sensitivity between the different cone types. One consequence of how we see color is that many different combinations of differing wavelengths of light will look like "the same color".</para>
-          <para>After extensive testing, the CIE produced the CIE-XYZ color space, which mathematically describes and models all the colors visible to an ideal human observer ("ideal" in the sense of modeling the tested responses of lots of individual humans). This color space is NOT a color profile in the normal sense of the word. Rather it provides an absolute "Profile Connection Space" (PCS) for translating color RGB values from one color space to another. (See <ulink url="http://en.wikipedia.org/wiki/Tristimulus#Tristimulus_values">here</ulink> and <ulink url="http://en.wikipedia.org/wiki/Color_vision">here</ulink>.)</para>
-          <para>CIE-XYZ is not the only PCS. Another commonly used PCS is CIE-Lab, which is mathematically derived from the CIE-XYZ space. CIE-Lab is intended to be "perceptually uniform", meaning "a change of the same amount in a color value should produce a change of about the same visual importance" (cited from <ulink url="http://en.wikipedia.org/wiki/Lab_color_space">Wikipedia article</ulink>). Wikipedia says "The three coordinates of CIELAB represent the lightness of the color (L* = 0 yields black and L* = 100 indicates diffuse white; specular white may be higher), its position between red/magenta and green (a*, negative values indicate green while positive values indicate magenta) and its position between yellow and blue (b*, negative values indicate blue and positive values indicate yellow)"  (cited from <ulink url="http://en.wikipedia.org/wiki/Lab_color_space">Wikipedia article</ulink>). </para>
-          <para>To be useful, color profiles need to be coupled with software that performs the translation from one color space to another via the PCS. In the world of linux open source software (and also many closed source, commercial programs), translation from one color space to another usually is done by <ulink url="http://www.littlecms.com/">LCMS</ulink>, the "little color management system". For what it's worth, my own testing has shown that LCMS does more accurate color space conversions than Adobe's proprietary color conversion engine. </para>
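-          <para>To watch LCMS do a PCS-mediated conversion from Python, here is a minimal sketch using Pillow's ImageCms wrapper and its built-in sRGB and Lab profiles:</para>
-          <programlisting>
-from PIL import Image, ImageCms
-
-# Convert one pure-red sRGB pixel to CIE-Lab via LCMS.
-srgb = ImageCms.createProfile("sRGB")
-lab = ImageCms.createProfile("LAB")
-to_lab = ImageCms.buildTransform(srgb, lab, "RGB", "LAB")
-pixel = Image.new("RGB", (1, 1), (255, 0, 0))
-print(ImageCms.applyTransform(pixel, to_lab).getpixel((0, 0)))
-          </programlisting>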
-       </sect3>
-     </sect2>
-
-     <sect2 id="editor-cm-wkspace"> <title>The Working Space</title>
-        <sect3> <title>So I told &digikam; where to find my monitor profile and I have a camera profile that I applied to the image file produced by my raw processing software. What's the next step in color management?  </title>
-          <para>You need to choose a working color space so you can edit your image. LCMS will transform your image from your camera color space to your chosen working space, via the PCS specified by your camera color profile.</para>
-       </sect3>
-        <sect3> <title>Why can't I just edit my images in the color space described by the camera profile? </title>
-          <para>After all, the camera profile should provide the best "fit" to the colors recorded by my camera, as processed by my raw processing procedure, right?  Wikipedia says, "Working spaces, such as sRGB or Adobe RGB, are color spaces that facilitate good results while editing. For instance, pixels with equal values of RGB should appear neutral."  "[P]ixels with equal values of RGB should appear neutral" just means that for any given pixel in an image that has been converted to a suitable working space, if R=G=B you should see grey or black or white on your screen. Many camera profiles violate this "neutral" condition. I am not aware of a list of other technical requirements for a suitable working space. However, I can think of another good reason why you wouldn't want to edit your image in your camera profile color space. If you look at the size of a typical camera profile, it is on the order of a quarter to a half a megabyte or more. It's got a lot of information about all the changes that need to be made at different regions of color and tonality in the original scene, to get accurate color rendition from the RGB values that come out of the raw processor. The camera profile is accurate (at least for colors in the original target) but not particularly mathematically smooth. Working space color profiles, on the other hand, are very small in size (half a kilobyte instead of half a megabyte) because they describe a color gamut in terms of smooth, continuous mathematical functions. Working space profiles don't need to make allowances for the "messiness" of real world sensors, so the mathematical manipulations performed during image editing will go much more smoothly and accurately than if you try to edit your image while it is still in the camera color space.</para>
-       </sect3>
-        <sect3> <title>Which working space should I choose? </title>
-          <para>Everyone has an opinion. I'm just going to lay out some of the bits of information needed to make an informed choice; a short profile-inspection sketch follows the list. Working space profiles are characterized by: </para>
-          <orderedlist>
-              <listitem><para>Gamma (or other transfer function), which dictates how much the original linear intensity values captured by the camera sensor (and subjected to the in-camera A-to-D conversion, then interpolated by the raw processing program to produce the image file) are altered to make editing easier or more precise.</para></listitem>
-              <listitem><para>RGB primaries which dictate the range of colors, that is, the color gamut, covered by a given profile.</para></listitem>
-              <listitem><para>White point (usually D50 or D65 though other values may be used), which specifies the color temperature of the white point of the working space. </para></listitem>
-          </orderedlist>
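-          <para>The profile-inspection sketch promised above: a few lines of Python that ask LCMS (via Pillow's ImageCms module) to describe a working space profile; the sRGB path is a placeholder for whichever profile you want to inspect:</para>
-          <programlisting>
-from PIL import ImageCms
-
-# Print the name and description LCMS reads from the profile file.
-profile = ImageCms.getOpenProfile("/usr/share/color/icc/sRGB.icc")
-print(ImageCms.getProfileName(profile))
-print(ImageCms.getProfileInfo(profile))
-          </programlisting>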
-       </sect3>
-        <sect3> <title>What gamma should my working space have?</title>
-          <para>The gamma of a color profile dictates what power transform needs to take place to properly convert from an image's embedded color profile (perhaps your working color space or your camera color profile) to another color profile with a different gamma, such as your chosen working space, or the display profile used to display the image on the screen or perhaps from one working space to another, or perhaps from your working space to your printer's color space. Dcraw outputs a 16-bit image with a linear gamma, which means that a histogram of the resulting image file shows the actual amount of light that each pixel on the camera sensor captured during the exposure (paraphrasing <ulink url="http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm">this page</ulink>). (Which is why at present applying a camera profile to the dcraw output also requires applying an appropriate gamma transform to get to the desired working space, unless the camera profile also uses gamma=1.) </para>
-          <para>One practical consequence of the gamma of a working space is that the higher the gamma, the more discrete tones are available for editing in the shadows, with consequently fewer tones available in the highlights. Changing the gamma of an image redistributes the number of tones available in the lighter and darker areas of an image. Theoretically, if you are working on a very dark-toned (low key) image you might want a working space with a higher gamma. And if you are working on a high key image, say a picture taken in full noon sunlight of a wedding dress with snow as a backdrop, you might want to choose a working space with a lower gamma, so you have more available tonal gradations in the highlights. </para>
-          <para>Theory aside, in the real world of real image editing, almost everyone uses working spaces with a gamma of either 1.8 or 2.2. sRGB and L*-RGB are two notable exceptions. </para>
-          <para>sRGB uses a transfer function close to that of a CRT (and thus not necessarily relevant to image editing or to display on an LCD). As Wikipedia notes, "Unlike most other RGB color spaces the sRGB gamma can not be expressed as a single numerical value. The overall gamma is approximately 2.2, consisting of a linear (gamma 1.0) section near black, and a non-linear section elsewhere involving a 2.4 exponent and a gamma (slope of log output versus log input) changing from 1.0 through about 2.3" (cited from <ulink url="http://en.wikipedia.org/wiki/SRGB">this page</ulink>), which makes for some complicated math during image processing. </para>
-          <para>L*-RGB uses as its transfer function the same perceptually uniform transfer function as the CIELab color space. "When storing colors in limited precision values" using a perceptually uniform transfer function "can improve the reproduction of tones" (cited from <ulink url="http://en.wikipedia.org/wiki/Lab_color_space">this page</ulink>).</para>
-          <para>In addition to gamma=1.8 and gamma=2.2, the only other gamma for a working space that gets much mention or use is linear gamma, or gamma=1.0. As noted above, dcraw outputs linear gamma files if you ask for 16-bit output. Linear gamma is used in HDR (high dynamic range) imaging and also if one wants to avoid introducing gamma-induced errors into one's regular low dynamic range editing. </para>
-          <para>"Gamma-induced errors" is a topic outside the scope of this tutorial. But see "Gamma errors in picture scaling" (cited from <ulink url="http://www.4p8.com/eric.brasseur/gamma.html">this page</ulink>) for gamma-induced tonality shifts; and of course see Timo Autiokari's informative (albeit somewhat infamous) website for a whole-hearted endorsement of using linear gamma working spaces (Timo's website seems to be down at present, though archived copies of his articles are still available through google). Bruce Lindbloom mentions a commonly-encountered gamma-induced error that is caused by incorrectly calculating luminance in a nonlinear RGB working space (see <ulink url="http://www.brucelindbloom.com/index.html?WorkingSpaceInfo.html">this page</ulink>, sidenote 1). And in a similar vein, the calculations involved in mixing colors together to produce new colors (such as using a digital filter to add warmth to an image) result in gamma errors unless the new colors are calculated by first transforming all the relevant values back to their linear values.</para>
-          <para>Unfortunately and despite their undeniable mathematical advantages, linear gamma working spaces have so few tones in the shadows that (in my opinion) they are impossible to use for editing if one is working in 8-bits, and still problematic at 16-bits. When the day comes when we are all doing our editing on 32-bit files produced by our HDR cameras on our personal supercomputers, I predict that we will all be using working spaces with gamma=1. Adobe Lightroom is already using a linear gamma working space "under the hood", CS2 allows the option of using linear gamma for mixing colors, and Lightzone has always used a linear gamma working space.</para>
-       </sect3>
-        <sect3> <title>How many discrete tonal steps are there in a digital image?</title>
-          <para>In an 8-bit image, you have 256 tonal steps from solid black to solid white. In a 16-bit image theoretically you have 65536 steps. But remember, those 16 bits started out as either 10 bits (=1024 steps), 12 bits (=4096 steps), or 14 bits (=16384 steps) as produced by the camera's A-to-D converter - the extra bits needed to reach 16 bits start out as just padding. The available tones are not distributed evenly from light to dark. In linear gamma mode (as the camera sensor sees things), there are far more tones in the highlights than in the shadows. Hence the advice, if you shoot raw, to "expose to the right but don't blow the highlights". See Ron Bigelow's <ulink url="http://www.ronbigelow.com/articles/articles.htm">articles on "why raw"</ulink> for a full discussion of the distribution of available tones in a raw image. </para>
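-          <para>A quick sketch makes this uneven distribution concrete: in a linear-gamma file each stop down from maximum brightness halves the recorded value, so each darker stop has half as many discrete levels as the stop above it. The numbers below assume an idealized 12-bit linear capture:</para>
-          <programlisting>
-bits = 12
-top = 2 ** bits              # 4096 levels in a 12-bit file
-for stop in range(1, 7):
-    hi = top // (2 ** (stop - 1))
-    lo = top // (2 ** stop)
-    print("stop", stop, "covers levels", lo, "to", hi - 1,
-          "=", hi - lo, "tones")
-          </programlisting>
-          <para>The brightest stop gets 2048 of the 4096 available tones, while the sixth stop down gets only 64 - which is exactly why "expose to the right" is good advice for raw shooters.</para>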
-       </sect3>
-        <sect3> <title>Should I use a large-gamut or a small-gamut working space? </title>
-          <para>One major consideration in choosing a working space is that some working spaces are bigger than others, meaning they cover more of the visible spectrum (and as a consequence include some imaginary colors - mathematical constructs that don't really exist). These bigger spaces offer the advantage of allowing you to keep all the colors captured by your camera and preserved by the LCMS conversion from your camera profile to the super-wide-gamut profile connection space and out again to your chosen working space. </para>
-          <para>But keeping all the possible colors comes at a price, as explained below. And it seems that any given digital image likely only contains a small subset of all the possible visible colors that your camera is capable of capturing. This small subset is easily contained in one of the smaller working spaces (an exception requiring a larger color gamut would be a picture of a highly saturated object such as a yellow daffodil).</para>
-          <para>Using a very large working space means that editing your image (applying curves, increasing saturation, etc) can easily produce colors that your eventual output device (printer, monitor) simply cannot reproduce (you cannot see these colors while you're editing, either). So the conversion from your working space to your output device space (say your printer) will have to remap the out-of-gamut colors in your edited image, some of which might even be totally imaginary, to your printer color space with its much smaller color gamut. This remapping process will lead to inaccurate colors and loss of saturation at best. Even worse, the remapping can easily lead to banding (posterization - gaps in what should be a smooth color transition, say, across an expanse of blue sky) and clipping (your carefully crafted muted transitions across delicate shades of red, for example, might get remapped to a solid block of dull red after conversion to your printer's color space). Also, the experts say that 8-bit images just don't have enough tones to stretch across a wide gamut working space without banding and loss of saturation, even before conversion to an output space. So if you choose a large gamut working space, make sure you start with a 16-bit image.</para>
-          <para>To summarize, large gamut working spaces, improperly handled, can lead to lost information on output. Small gamut working spaces can clip information on input. Medium-sized gamut working spaces try to strike a happy medium. Like Wikipedia says, it's a trade-off.</para>
-          <para>Here are some oft-repeated bits of advice on choosing a working space: </para>
-          <para>
-          <orderedlist>
-              <listitem><para>For images intended for the web, use (or at least convert the final image to) sRGB. </para></listitem>
-              <listitem><para>For the most accuracy in your image editing (that is, making the most of your limited "bits" with the least risk of banding or clipping when you convert your image from your working space to an output space), use the smallest working space that includes all the colors in the scene that you photographed, plus a little extra room for those new colors you intentionally produce as you edit. </para></listitem>
-              <listitem><para>If you are working in 8-bits rather than 16-bits, choose a smaller rather than a larger working space to avoid clipping and banding. </para></listitem>
-              <listitem><para>For archival purposes, convert your raw file to a 16-bit tiff with a large gamut working space to avoid losing color information. Then convert this archival tiff to your medium-gamut or large-gamut working space of choice (saving the converted working tiff under a new name, of course). </para></listitem>
-          </orderedlist></para>
-          <para>For more information on choosing a working space, see <ulink url="http://www.brucelindbloom.com/">Bruce Lindbloom's website</ulink> (Info, Information about RGB Working Spaces) for a visual comparison of the gamut (array of included colors) of the various working color spaces. See <ulink url="http://www.luminous-landscape.com/tutorials/prophoto-rgb.shtml">here</ulink> and <ulink url="http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm">here</ulink> for pro and con presentations, respectively, of the merits of using large gamut working spaces. And while you are on the cambridgeincolour.com website, check out the tutorial on color management. </para>
-       </sect3>
-     </sect2>
-
-     <sect2 id="editor-cm-rendering"> <title>Printer Profiles with Rendering Intents and Soft-Proofing</title>
-      <sect3>   <title>Where do I get a printer profile?</title>
-          <para>Whew!  We've come a long way - almost ready to print that image! Where do I get a printer profile?  Well, you already know the answer. You can use the generic profile that comes with your printer. You can purchase a professionally produced profile. If you ask, some commercial printing establishments will send you their printer profiles (which won't work with your printer!). You can make your own printer profile using Argyll, in which case your profile can be tailored to your particular paper, inks, and even image characteristics (if you are printing a series of images with a color palette limited to subdued browns, you don't need a printer profile that tries to make room for saturated cyans and blues). I cannot offer any more advice or links to more information on this subject because I've just started to learn about printing images (previously I've only viewed and shared my images via monitor display). But do see <ulink url="http://www.luminous-landscape.com/essays/fancy-graphics2.shtml">this page</ulink> for an excellent presentation of the benefits of producing your own printer profile, plus a resoundingly positive endorsement of using Argyll for making your printer profile.</para>
-       </sect3>
-        <sect3> <title>What about rendering intents?  </title>
-          <para>"Rendering intent" refers to the way color gamuts are handled when the intended target color space (for example, the monitor or the printer) cannot handle the full gamut of the source color space (for example the working space).</para>
-          <para>There are four commonly-used rendering intents: perceptual, relative colorimetric, absolute colorimetric, and saturation. I have searched the Internet looking for the best write-up on rendering intents and the trade-offs involved in choosing one rendering intent over another. I believe I found that article. See <ulink url="http://www.cambridgeincolour.com/tutorials/color-space-conversion.htm">this page</ulink>. Briefly quoting from the Cambridge in Color article on color space conversion:</para>
-          <blockquote><para>Perceptual and relative colorimetric rendering are probably the most useful conversion types for digital photography. Each places a different priority on how they render colors within the gamut mismatch region. </para>
-          <para>Relative colorimetric maintains a near exact relationship between in gamut colors, even if this clips out of gamut colors. </para>
-          <para>In contrast, perceptual rendering tries to also preserve some relationship between out of gamut colors, even if this results in inaccuracies for in gamut colors... </para>
-          <para>Absolute is similar to relative colorimetric in that it preserves in gamut colors and clips those out of gamut, but they differ in how each handles the white point... Relative colorimetric skews the colors within gamut so that the white point of one space aligns with that of the other, while absolute colorimetric preserves colors exactly (without regard to changing white point). ... </para>
-          <para>Saturation rendering intent tries to preserve saturated colors.</para></blockquote> 
-          <para>I would quote the entire article (it's that good), but that would involve "unfair usage" I am sure. So go check out the article for yourself. The illustrations in the article really help to clarify what the different rendering intents actually do and the compromises involved in choosing one rendering intent over another. </para>
-          <para>For another excellent source of information on rendering intents with accompanying illustrations, see <ulink url="http://www.normankoren.com/color_management.html#Rendering_intent">this page</ulink>.</para>
-       </sect3>
-        <sect3> <title>What rendering intent should I use for displaying images on my monitor? </title>
-          <para>The usual choice is relative colorimetric. I would suggest that you not use absolute colorimetric (&eg; with ufraw) unless you want very strange results.</para>
-       </sect3>
-        <sect3> <title>What is soft-proofing?  </title>
-          <para>Soft-proofing will show you the differences to be expected between what you see on your screen and what you will see when you make a print. To soft-proof, you need a profile for your printer (actually, for your printer-paper combination, as the paper used affects the ink colors and the white point). If you don't like the soft-proofed image, you can make changes (in your working space, not after converting the image to your printer space!) to bring the soft-proofed image more in line with how you want the final print to look. </para>
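-          <para>Soft-proofing is normally done from your image editor's menus, but the underlying mechanics can be sketched with any lcms-based scripting tool. Below is a minimal illustration using Pillow's ImageCms module, which wraps lcms; the profile file names are placeholders for your own working-space, monitor, and printer-paper profiles:</para>
-          <programlisting>
-from PIL import Image, ImageCms
-
-im = Image.open("edited-in-working-space.tif")  # placeholder: an RGB image
-
-# Build a proofing transform: working space to monitor, simulating
-# how the image will look after going through the printer profile.
-proof = ImageCms.buildProofTransform(
-    "working-space.icc",    # input (working space) profile
-    "monitor.icc",          # output (display) profile
-    "printer-paper.icc",    # the printer/paper device being simulated
-    "RGB", "RGB")
-
-preview = ImageCms.applyTransform(im, proof)
-preview.show()  # compare against the un-proofed image
-          </programlisting>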
-       </sect3>
-        <sect3> <title>What rendering intent should I use when I soft-proof?  </title>
-          <para>I've always heard that you ought to use "perceptual intent" when transforming an image from a larger working space to a smaller printer space. But especially in light of reading the aforementioned Cambridge in Color article, it seems that perceptual intent may or may not give the best results, depending on the respective gamuts of the image and printer/paper combination in question. I've been experimenting quite a bit lately with output intents for printing. I find that for my images, the colorimetric intents give clearer, brighter colors, albeit at the cost of having to carefully re-edit the image to avoid clipping highlights and shadows. Perusal of the various photography forums and articles available on the Internet indicates that "perceptual intent" is just an option, not by any means always "the best" option. </para>
-       </sect3>
-        <sect3> <title>And what about black point compensation?</title>
-          <para>The clearest definition I could find about black point compensation is as follows:  "BPC is a way to make adjustments between the maximum black levels of digital files and the black capabilities of various digital devices [such as a printer]" (cited from <ulink url="http://www.newsandtech.com/issues/2004/05-04/pt/05-04_blackpoint.htm">this page</ulink>). </para>
-          <para>The article from which this definition comes offers a very clear explanation (albeit Adobe-centric) of how black point compensation works and the practical consequences of using or not using it. The bottom line advice is, it depends. Try soft-proofing with black point compensation on. If you like it, use it. Otherwise, don't use it.</para>
-       </sect3>
-        <sect3> <title>Can all of the rendering intents be used when converting from one color space to another? </title>
-          <para>In a word, no. Which rendering intents are available when converting from one profile to another depends on the destination profile. Not every profile supports every rendering intent. What happens when you select an unsupported rendering intent is that LCMS quietly uses the profile's default rendering intent. </para>
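-          <para>With an lcms-based scripting tool you can query a profile before converting. A small sketch using Pillow's ImageCms wrapper (the profile name is a placeholder, and the enum names assume a recent Pillow release):</para>
-          <programlisting>
-from PIL import ImageCms
-
-profile = ImageCms.getOpenProfile("printer-paper.icc")  # placeholder
-
-for intent in (ImageCms.Intent.PERCEPTUAL,
-               ImageCms.Intent.RELATIVE_COLORIMETRIC,
-               ImageCms.Intent.SATURATION,
-               ImageCms.Intent.ABSOLUTE_COLORIMETRIC):
-    # Shows, per intent, whether this profile supports it for output;
-    # unsupported intents fall back to the profile's default intent,
-    # as described above.
-    print(intent, ImageCms.isIntentSupported(
-        profile, intent, ImageCms.Direction.OUTPUT))
-          </programlisting>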
-          <para>For an informative and humorous discussion of myths surrounding the use of various rendering intents, see <ulink url="http://www.colorwiki.com/wiki/Color_Management_Myths_21-25#Myth_.2321:_There_are_perceptual_rendering_intents_available_when_converting_from_scanner.2Fcamera-.3Eworkspace_or_workspace-.3Eworkspace.">here</ulink> and <ulink url="http://www.colorwiki.com/wiki/Color_Management_Myths_6-10">here</ulink>.</para>
-       </sect3>
-        <sect3> <title>Why does the perceptual rendering intent get recommended so often as the "photographic" rendering intent? </title>
-          <para>It's a safe choice because it prevents clipping of shadows and highlights, although possibly at the expense of desaturating all your colors. So if you don't want to bother doing any soft-proofing, choose the perceptual rendering intent (if you don't want to soft-proof, probably you should also stick with sRGB).</para>
-       </sect3>
-     </sect2>
-
-     <sect2 id="editor-cm-definitions"> <title>More definitions about Color Management</title>
-      <para>You've reached the end of this tutorial on color management. We've "color-managed" our way all the way from the camera and the monitor, to the working space, to the printer. I've learned a lot and I hope you have, too. What follows are some additional comments and definitions:</para>
-          <para><emphasis>Assign</emphasis> a profile means change the meaning of the RGB numbers in an image by embedding a new profile, without changing the actual RGB numbers associated with each pixel. <emphasis>Convert</emphasis> to a profile means embed a new profile and also change the RGB numbers at the same time, so that the meaning of the RGB values - that is, the real-world visible color represented by the trio of RGB numbers associated with each pixel - remains the same before and after the conversion from one space to another.</para>
-          <para>On the other hand, every time you assign a new working space profile rather than convert to a new working space (except when initially assigning a camera profile to the image file you get from your raw processing software), the appearance of the image should more or less drastically change (usually for the worse, unless the wrong profile had previously been inadvertently embedded in the image).</para>
-          <para>In theory, you should be able to do multiple conversions of an image from one working space to another, and if you are using a color-managed image editor, even though all the RGB numbers in the image will change with each conversion, the image displayed on your screen should look the same. In actual fact, because of rounding errors upon each conversion, not to mention gamut-clipping when going from a larger to a smaller working space, every time you convert from one space to another the image degrades a bit. </para>
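-          <para>The assign/convert distinction is easy to demonstrate with an lcms-based scripting tool. A minimal sketch using Pillow's ImageCms module (file names are placeholders):</para>
-          <programlisting>
-from PIL import Image, ImageCms
-
-im = Image.open("photo.tif")  # placeholder: an RGB image
-
-# CONVERT: the pixel RGB numbers are recalculated so that the
-# visible colors stay (nearly) the same in the new space.
-converted = ImageCms.profileToProfile(
-    im, "old-working-space.icc", "new-working-space.icc")
-converted.save("converted.tif")
-
-# ASSIGN: the pixel numbers are untouched; only the embedded
-# profile - the meaning of those numbers - changes, so the
-# displayed appearance shifts.
-with open("new-working-space.icc", "rb") as f:
-    im.save("assigned.tif", icc_profile=f.read())
-          </programlisting>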
-          <para><emphasis>Device-dependent</emphasis> and <emphasis>device-independent</emphasis> profiles:  The camera profile, a scanner profile, your monitor's profile, and your printer's color profile are all device-dependent profiles - these profiles only work with the specific device for which they were produced by means of profiling. Working space profiles and the PCSs are "device-independent". Once an image file has been translated by LCMS via a PCS to a device-independent working space, in a sense it no longer matters what device originally produced the image. But as soon as you want to display or print the image, then the device (monitor, printer) used matters a great deal and requires a device-dependent profile.</para>
-          <para>An <emphasis>interpolated raw file</emphasis> isn't a raw file. For some reason this simple point causes a lot of confusion. But after a raw file has been interpolated by raw processing software and then output as a tiff or jpeg, the original raw file is still a raw file, of course, but the interpolated file is just an image file. It isn't a raw file. </para>
-          <para><emphasis>Linear</emphasis> has two related and easily confused definitions. "Linear" can mean that the image tonality reflects the tonality in the original scene as photographed instead of being altered by the application of an S-curve or other means of changing local and global tonality. It can also mean that the gamma transfer curve of the color space is linear. An image can be "linear" in either, both, or neither of these two senses. A raw image as developed by dcraw is linear in both senses. The same image as developed by Canon's DPP won't be linear in either sense.</para>
-          <para><emphasis>HDR and LDR</emphasis> do not refer to the bit-depth of the image. "High dynamic range" and "low dynamic range" refer to the total dynamic range encompassed by an image. A regular low dynamic range image, say encompassing a mere 5 "stops" (the average digital camera these days can easily accommodate 8 or 9 stops), can be saved as an 8-, 16-, 32-, or even 64-bit image, depending on your software, but the dynamic range of the image isn't thereby increased. Only the number of discrete steps from the brightest to the darkest tone in the image has changed. Conversely, a 22-stop scene (way beyond the capacity of a consumer-oriented digital camera without using multiple exposures) can be saved as an 8- or 16-bit image, but the resulting image will exhibit extreme banding (that is, it will display extreme banding in any given tonal range that can actually be displayed on a typical monitor at one time) because of the relatively few available discrete tonal steps from the lightest to the darkest tone in the image.</para>
-          <para><emphasis>In-camera produced jpegs don't need a camera profile</emphasis>. All jpegs (or tiffs, if you have an older Minolta Dimage camera) coming straight out of a camera (even if produced by point-and-shoot cameras that don't allow you to save a raw file) start life inside the camera as a raw file produced by the camera's A to D converter. If you save your images as jpegs, then the processor inside the camera interpolates the raw file, assigns a camera profile, translates the resulting RGB numbers to a working space (usually sRGB but sometimes you can choose AdobeRGB, depending on the camera), does the jpeg compression, and stores the jpeg file on your camera card. So jpegs (or tiffs) from your camera don't need to be assigned a camera profile which is then translated to a working space via a PCS. Jpegs from a camera are already in a working space. </para>
-          <para><emphasis>Useful mathematical information</emphasis> if you are dealing with dcraw's linear gamma output:  Mathematically speaking, when doing a gamma transform you normalize the RGB numbers (that is, divide by 255 if you are working with 8-bit values), raise the resulting numbers to an appropriate power depending on the respective gammas of the starting and ending color space, then renormalize the results to a new set of RGB numbers. It's not hard, and very instructive, to do this with a calculator for a few sets of RGB numbers spaced from (0,0,0) to (255,255,255) to see how RGB numbers change from one gamma encoding to another. LCMS does this for you when you ask LCMS to convert from one color space to another. However, if ALL you are doing is converting from one color space to the same color space except for a different gamma, use imagemagick instead of LCMS and just manipulate the RGB numbers directly, then assign the new working space to the image - the results will be more accurate than going through a color space transform.</para>
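-          <para>As a worked example, here is the calculator exercise above as a few lines of Python, re-encoding 8-bit values from gamma=1.0 to gamma=2.2 (bare arithmetic only; a real conversion between profiles should be left to a color management engine):</para>
-          <programlisting>
-gamma_in, gamma_out = 1.0, 2.2
-
-for v in (0, 16, 64, 128, 255):
-    normalized = v / 255.0                        # normalize
-    out = normalized ** (gamma_in / gamma_out)    # power transform
-    print(v, "->", round(out * 255))              # renormalize
-          </programlisting>
-          <para>Note how the dark values get stretched upward (16 becomes 72), which is the redistribution of shadow tones discussed earlier.</para>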
-          <para><emphasis>Copyrighted and copyleft working spaces</emphasis>: I will take it as given that all the ordinarily encountered working spaces, such as: </para>
-          <orderedlist>
-              <listitem><para>the several variants of sRGB (see color.org) </para></listitem>
-              <listitem><para>BruceRGB or BestRGB </para></listitem>
-              <listitem><para>the various ECI (European color initiative) working space profiles  </para></listitem>
-              <listitem><para>AdobeRGB, Adobe WideGamutRGB, and Kodak/Adobe ProPhotoRGB (Kodak and Adobe ProPhoto are the same, just branded differently) and their non-branded, non-copyrighted counterparts (Oyranos includes a non-branded version of AdobeRGB) </para></listitem>
-          </orderedlist>
-          <para>and quite a few other working spaces that could be added to this list are all more or less suitable as working spaces. Which working space you should use depends solely on you and your requirements as the editor of your digital images, given your eventual output intentions (web, fine art print, etc). However, as a critical aside, if you are using Adobe or other copyrighted working space profiles, these profiles contain copyright information that shows up in your image exif information. Lately I've been perusing the openicc mailing lists. Apparently LCMS can be used to produce nonbranded, copyleft working space profiles that are just the same as - actually indistinguishable from - the branded, copyrighted working space profiles. It would be a wonderful addition to &digikam; if a set of "copyleft" working space profiles, including nonbranded, relabelled versions of ProPhotoRGB, AdobeRGB, and Adobe WidegamutRGB (perhaps in two flavors each: linear gamma and the usual gamma), could be bundled as part of the &digikam; package. </para>
-     </sect2>
-
-     <sect2 id="editor-cm-connection"> <title>The Color Space Connections</title>
-
-         <para>So the question for each RGB trio of values in the (let us assume) 16-bit tiff produced by dcraw becomes, "What does a particular trio of RGB values for the pixels making up images produced by this particular (make and model) camera really mean in terms of some absolute standard referencing some ideal observer?" This absolute standard referencing an ideal observer is more commonly called a <emphasis>Profile Connection Space</emphasis>. A camera profile is needed to accurately characterize or describe the response of a given camera's pixels to light entering that camera, so that the RGB values in the output file produced by the raw converter can be translated first into an absolute Profile Connection Space (PCS) and then from the PCS to your chosen working space. As a very important aside, for most of the open source world (including &digikam;), the software used to translate from the camera profile to the PCS, from the PCS to your chosen working space, and eventually to your chosen output space (for printing or perhaps monitor display) is based on lcms (the <ulink url="http://littlecms.com">little color management engine</ulink>). For what it's worth, my own testing has shown that lcms does more accurate conversions than Adobe's proprietary color conversion engine. Further, for almost all raw conversion programs, including commercial closed source software such as Adobe Photoshop, the raw conversion is typically based on the decoding of the proprietary raw file done by dcraw. David Coffin, author of dcraw, is the hero of raw conversion - without him we'd all be stuck using the usually Windows/Mac-only proprietary software that comes with our digital cameras. dcraw's interpolation algorithms (not to be confused with the aforementioned decoding of the proprietary raw file), which are part of &digikam;, produce results equal or superior to commercial, closed source software when properly used. We in the world of Linux and open source software are NOT second-class citizens when it comes to digital imaging. Far from it. </para>
-
-         <para>There are two commonly used Profile Connection Spaces - CIELAB and CIEXYZ (see <ulink url="http://en.wikipedia.org/wiki/Color_management">Color management</ulink>, section on color translation, then look up CIELAB and CIEXYZ on wikipedia).  Lcms uses the camera profile to translate the RGB values from the interpolated raw file, that is, the tiff produced by dcraw, into the appropriate Profile Connection Space (usually CIEXYZ - why CIEXYZ? I haven't taken the time to learn). A profile connection space is not itself a working space.  Rather a PCS is an absolute reference space used only for translating from one color space to another - think of a PCS as a Universal Translator for all the color profiles that an image might encounter in the course of its journey from camera raw file to final output:</para>
-         <orderedlist>
-            <listitem><para>Lcms uses the camera profile, also called an input profile, to translate the interpolated dcraw-produced RGB numbers, which only have meaning relative to your (make and model of) camera, to a second set of RGB numbers that only have meaning in the Profile Connection Space.</para></listitem>
-            <listitem><para>Lcms translates the Profile Connection Space RGB numbers to the corresponding numbers in your chosen working space so you can edit your image.  And again, these working space numbers ONLY have meaning relative to a given working space.  The same red, visually speaking, is represented by different trios of RGB numbers in different working spaces; and if you assign the wrong profile the image will look wrong, slightly wrong or very wrong depending on the differences between the two profiles.</para></listitem>
-            <listitem><para>While you are editing your image in your chosen working space, then lcms should translate all the working space RGB numbers back to the PCS, and then over to the correct RGB numbers that enable your monitor (your display device) to give you the most accurate possible display representation of your image as it is being edited.  This translation for display is done on the fly and you should never even notice it happening, unless it doesn't happen correctly - then the displayed image will look wrong, perhaps a little wrong, perhaps really, really, really wrong. </para></listitem>
-            <listitem><para>When you are satisfied that your edited image is ready to share with the world, lcms translates the working space RGB numbers back into the PCS space and out again to a printer color space using a printer profile characterizing your printer/paper combination (if you plan on printing the image) or to sRGB (if you plan on displaying the image on the web or emailing it to friends or perhaps creating a slide-show to play on monitors other than your own).</para></listitem>
-       </orderedlist>
-
-       <para>To back up a little bit and look at the first color profile an image encounters, that is, the camera profile (see (1) immediately above) - dcraw can in fact apply your camera profile for you (dcraw uses lcms internally). But (i) the generation of the tiff composed of the interpolated RGB values derived from the camera raw file, and (ii) the application of the camera profile to the interpolated file, are two very distinct and totally separable steps (separable in theory and practice for dcraw; in theory only for most raw converters). The dcraw command line output options "-o 0 [Raw color (unique to each camera)] -4 [16-bit linear] -T [tiff]" tell dcraw to output the RGB numbers from the raw interpolation into a tiff WITHOUT applying a camera input profile (the words in brackets explain the options but should not be entered at the command line). Then, if you truly enjoy working from the command line, you can use the lcms utility tifficc to apply your camera profile yourself. The advantage of doing so is that you can tell lcms to use a high quality conversion (dcraw seems to use the lcms default, medium). The disadvantage, of course, is that applying your camera profile from the command line adds one extra step to your raw workflow.</para>
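-       <para>For those who do enjoy the command line, the two separable steps can be scripted. A minimal sketch, driving the tools from Python (all file names are placeholders; the dcraw options are the ones quoted above, while the tifficc input/output profile flags should be double-checked against your installed lcms version):</para>
-       <programlisting>
-import subprocess
-
-# Step (i): interpolate the raw file to a 16-bit linear tiff in
-# raw color, WITHOUT applying a camera input profile.
-subprocess.run(["dcraw", "-o", "0", "-4", "-T", "photo.raw"], check=True)
-
-# Step (ii): apply the camera profile yourself with the lcms
-# utility tifficc, going from camera space to your working space.
-subprocess.run(["tifficc", "-i", "camera-profile.icc",
-                "-o", "working-space.icc",
-                "photo.tiff", "photo-working.tiff"], check=True)
-       </programlisting>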
-
-      <sect3 id="using-iccprofile">   <title>Where to find camera profiles</title>
-
-        <para>So where do we get these elusive and oh-so-necessary camera-specific profiles that we need to translate our interpolated raw files to a working color space? The <ulink url="http://ufraw.sourceforge.net/Colors.html">UFRAW website </ulink> section on color management has a bit of information on where to find ready-made camera profiles.  It's an unfortunate fact of digital imaging that the camera profiles supplied by Canon, Nikon, and the like don't work as well with raw converters other than each camera manufacturer's own proprietary raw converter. Which is why Bibble and Phase One (and Adobe, but ACR hides the Adobe-made profiles inside the program code), for example, have to make their own profiles for all the cameras that they support - keep this proprietary propensity of your camera manufacturer in mind next time you buy a digital camera. </para>
-        <para>But back to finding a camera profile for YOUR camera - the real answer (assuming you don't find a ready-made profile that makes you happy) is to make your own camera profile or have one made for you.  There are quite a few commercial services that provide profiling (for a fee, of course).  Or you can use LProf or Argyll to profile your camera yourself.  I haven't yet walked down that road so I cannot speak about how easy or difficult the process of profiling a camera might be.  But I would imagine, knowing how very meticulous the people behind Argyll, LProf, and lcms are about color management, that making your own camera profile is very doable and very likely the results will be better than any proprietary profile. After all, Canon (and also Bibble and Phase One for that matter) didn't profile MY camera - they just profiled a camera LIKE mine.</para>
-        <para>Working Spaces:</para>
-
-        <para>So now your raw file has been interpolated by dcraw and you've obtained a camera profile and used lcms tifficc to apply your camera profile to the tiff produced by dcraw (or you've asked dcraw to apply it for you).  What does all this mean?  The real answer involves a lot of math and color science that goes way over my head and likely yours.  The short, practical answer is that neither the camera profile space nor the Profile Connection Space is an appropriate space for image editing.  Your next step is to choose a working space for image editing.  And then you (or rather the lcms color management engine that your open source digital imaging software uses) actually perform a double translation. First lcms uses the camera profile to translate the RGB values of each pixel in the dcraw-output-image-without-camera-profile-applied into the aforementioned Profile Connection Space. Then it translates the RGB values of each pixel from the PCS to your chosen working space.</para>
-
-        <para>Confusions and confusing terminology:</para>
-        <para>Before talking more about working spaces, some confusions and confusing terminology need to be cleared up: </para>
-        <para>First, sRGB is both a working color space and an output color space for images intended for the web and for monitor display (if you have a spiffy new monitor with a gamut larger than the gamut covered by sRGB, obviously you might want to reconsider what output profile to use to best take advantage of your wonderful and hopefully calibrated and profiled monitor, but please convert your image to sRGB before sending it on to your friends!).  sRGB is also the color space that a lot of home and mass-production commercial printers expect image files to be in when sent to the printer.  It is also the color space that most programs assume if an image does not have an embedded color profile telling the program what color space should be used to interpret (translate) the RGB numbers. So if you choose to not use color-management, your color-management choices are simple - set everything to sRGB. </para>
-        <para>Second, all jpegs (or tiffs, if you have an older Minolta Dimage camera) coming straight out of a camera (even if produced by point-and-shoot cameras that don't allow you to save a raw file) start life inside the camera as a raw file produced by the camera's A to D converter.  The processor inside the camera interpolates the raw file, assigns a camera profile, translates the resulting RGB numbers to a working space (usually sRGB but sometimes you can choose AdobeRGB, depending on the camera), does the jpeg compression, and stores the jpeg file on your camera card.  So jpegs (or tiffs) from your camera NEVER need to be assigned a camera or input profile which is then translated to a working space via a PCS. Jpegs from a camera are already in a working space.</para>
-        <para>Third, in case anyone is unsure about this point, note that an interpolated raw file is no longer a raw file - it has been interpolated and then output as a tiff whose RGB values need to be translated to a working space, using the camera profile, the PCS, and lcms. Fourth (strictly for future reference), to introduce a bit of commonly heard color-management terminology here - the camera profile and your printer's color profile are both device-dependent, whereas the working space is device-independent - it can be used with any image, with any properly color-managed software, without regard for where the image originated.</para>
-        <para>Fifth, above I have used the words translate and translation as a descriptive metaphor for what lcms does when it translates RGB values from one color space to another via the PCS. The usual and correct terminology is convert and conversion, which I will use below. The four methods of conversion from one color space to another are: perceptual, relative colorimetric, absolute colorimetric, and saturation.  Which method of conversion you should use for any given image processing step from raw file to final output image is beyond the scope of this tutorial. The standard advice is: when in doubt, use perceptual.</para>
-        <para>Sixth (and again, strictly for future reference), assign a profile means change the meaning of the RGB numbers in an image by embedding a new profile without changing the actual RGB numbers associated with each pixel in the image; convert means embed a new profile, but also change the RGB numbers at the same time so that the meaning of the RGB values - that is, the real-world visible color represented by the trio of RGB numbers associated with each pixel in an image - remains the same before and after the conversion from one space to another. You should be able to do multiple conversions of an image from one working space to another, and with a properly color-managed image editor, even though all the RGB numbers in the image will change with each conversion, the image on your screen should look the same (leaving aside the usually unnoticeable small but inevitable changes from accumulated gamut mismatches and mathematical rounding errors). However, every time you assign a new working space profile rather than convert to a new working space, the appearance of the image should more or less drastically change (usually for the worse).</para>
-        <para>Finally (and this is a crucially important point), color management is NOT only relevant if you shoot raw.  Color management affects every stage of the image processing pipeline, whether you start with a raw file that you yourself interpolate and translate into a tiff, or if you start with a jpeg or tiff produced by your camera.</para>
-
-        <para>Copyrighted and copyleft working spaces:</para>
-        <para>I will take it as given that ALL the ordinarily encountered working spaces, such as:</para>
-        <orderedlist>
-          <listitem><para>the several variants of sRGB (see <ulink url="http://www.color.org/v4spec.xalter">color.org</ulink>) </para></listitem>
-          <listitem><para><ulink url="http://www.brucelindbloom.com">BruceRGB</ulink></para></listitem>
-          <listitem><para>the various ECI (European color initiative) working space <ulink url="http://www.eci.org/doku.php?id=en:colourstandards:workingcolorspaces">profiles</ulink></para></listitem>
-          <listitem><para>AdobeRGB, Adobe WideGamutRGB, and Kodak/Adobe ProPhotoRGB (Kodak and Adobe ProPhoto are the same, just branded differently) and their <ulink url="http://www.behrmann.name/index.php?option=com_content&amp;task=view&amp;id=34&amp;Itemid=68">non-branded, non-copyrighted</ulink> counterparts (Oyranos includes a non-branded version of AdobeRGB)</para></listitem>
-          <listitem><para>and quite a few others that could be added to this list are all more or less suitable as working spaces. Which working space you should use depends only and solely on YOU, on YOUR requirements as the editor of YOUR digital images with YOUR eventual output intentions (web, fine art print, etc).</para></listitem>
-        </orderedlist>
-        <para>However, as a critical aside, if you are using Adobe (or other copyrighted) working space profiles, these profiles contain copyright information that shows up in your image exif information.  Lately I've been perusing the openicc mailing lists. Apparently lcms can be used to produce nonbranded, copyleft working space profiles that are just the same as - actually indistinguishable from - the branded, copyrighted working space profiles. It would be a wonderful addition to &digikam; if a set of "copyleft" working space profiles, including nonbranded, relabelled versions of ProPhotoRGB, AdobeRGB, and Adobe WidegamutRGB (perhaps in two flavors each: linear gamma and the usual gamma), could be bundled as part of the &digikam; package. </para>
-
-        <para>Which working space: gamma</para> 
-        <para>Now, the next question is: which working space should I use? <ulink url="http://en.wikipedia.org/wiki/Color_management#Working_spaces">Wikipedia says: </ulink> <blockquote><para>Working spaces, such as sRGB or Adobe RGB, are color spaces that facilitate good results while editing. For instance, pixels with equal values of RGB should appear neutral. Using a large (gamut) working space will lead to posterization, while using a small working space will lead to clipping. This trade-off is a consideration for the critical image editor</para></blockquote> </para>
-        <para>Well, that quote from wikipedia is about as clear as mud and I don't know if I will be able to explain it more clearly, but I will try. "[P]ixels with equal values of RGB should appear neutral" just means that for any given pixel in an image that has been converted to a suitable working space, if R=G=B you should see grey or black or white on your screen.  </para>
-        <para>I am not aware of a list of other technical requirements for a suitable working space, though undoubtedly someone has produced such a list. But most working space profiles are characterized by: </para>
-        <orderedlist>
-           <listitem><para>RGB primaries which dictate the range of colors, that is, the gamut covered by a given profile</para></listitem>
-           <listitem><para>white point, usually D50 or D65, which dictates the total dynamic range of the working space, from 0,0,0 (total black) to the brightest possible white</para></listitem>
-           <listitem><para>gamma</para></listitem>
-        </orderedlist> 
-        <para>The practical consequences that result from using different RGB primaries, leading to larger or smaller working spaces, are discussed below. The practical consequences for different choices for the working space white point are beyond the scope of this tutorial. Here I will talk a little bit about the practical consequences of the working space gamma (for an excellent article and references, look up gamma on wikipedia).  </para>
-        <para>The gamma of a color profile dictates what power transform needs to take place to properly convert from an image's embedded color profile (perhaps your working color space) to another color profile with a different gamma, such as (i) the display profile used to display the image on the screen, (ii) a new working space, or (iii) your printer's color space. </para>
-
-        <tip><para>Mathematically speaking, for a power transform you normalize the RGB numbers and raise the resulting numbers to an appropriate power depending on the respective gammas of the starting and ending color space, then renormalize the results to a new set of RGB numbers. Lcms does this for you when you ask lcms to convert from one color space to another; however, if ALL you are doing is a power transform, use imagemagick instead of lcms and just manipulate the RGB numbers directly - the results will be more accurate.</para></tip>
-
-        <para>One practical consequence of the gamma of a working space is that the higher the gamma, the more tones are available for editing in the shadows, with consequently fewer tones available in the highlights. So theoretically, if you are working on a very dark-toned (low key) image you might want a working space with a higher gamma.  And if you are working on a high key image, say a picture taken in full noon sunlight of a wedding dress with snow as a backdrop, you might want to choose a working space with a lower gamma, so you have more available tonal gradations in the highlights. But in the real world of real image editing, almost everyone uses working spaces with either gamma 1.8 or 2.2.  </para>
-        <para>Some people are trying to standardize on gamma 2.0.  sRGB and LStar-RGB are not gamma-based working spaces. Rather, sRGB uses a <ulink url="http://en.wikipedia.org/wiki/SRGB">hybrid gamma</ulink>, and LStar-RGB uses a luminosity-based tonal response curve instead of a gamma value - see <ulink url="http://www.colormanagement.org/en/workingspaces.html">here</ulink> for more information, and then google around for more in-depth information.  </para>
-        <para>In addition to gamma 1.8 and gamma 2.2, the only other gamma for a working space that gets much mention or use is gamma 1.0, also called linear gamma. <emphasis>Linear gamma</emphasis> is used in HDR (high dynamic range) imaging and also if one wants to avoid introducing gamma-induced errors into one's regular low dynamic range editing. "Gamma-induced errors" is a topic outside the scope of this tutorial, but see <ulink url="http://www.4p8.com/eric.brasseur/gamma.html">"Gamma errors in picture scaling"</ulink> and <ulink url="http://www.21stcenturyshoebox.com/essays/color_reproduction.html">this essay on color reproduction</ulink> for gamma-induced color shifts. </para>
-        <para>Unfortunately and despite their undeniable mathematical advantages, linear gamma working spaces have so few tones in the shadows that (in my opinion) they are impossible to use for editing if one is working in 8-bits, and still problematic at 16-bits.  When the day comes when we are all doing our editing on 32-bit files produced by our HDR cameras on our personal supercomputers, I predict that we will all be using working spaces with gamma 1; Adobe Lightroom is already using a linear gamma working space "under the hood" and Lightzone has always used a linear gamma working space. </para>
-
-        <para>Which working space: <emphasis>large gamut</emphasis> or <emphasis>small gamut</emphasis></para>
-        <para>One MAJOR consideration in choosing a working space is that some working spaces are bigger than others, meaning they cover more of the visible spectrum (and perhaps even include some imaginary colors - mathematical constructs that don't really exist).  These bigger spaces offer the advantage of allowing you to keep all the colors captured by your camera and preserved by the lcms conversion from your camera profile to the really big profile connection space. </para>
-        <para>But keeping all the possible colors comes at a price.  It seems that any given digital image (pictures of daffodils with saturated yellows being one common exception) likely only contains a small subset of all the possible visible colors that your camera is capable of capturing.  This small subset is easily contained in one of the smaller working spaces.  Using a very large working space means that editing your image (applying curves, saturation, etc) can easily produce colors that your eventual output device (printer, monitor) simply cannot display.  So the conversion from your working space to your output device space (say your printer) will have to remap the out of gamut colors in your edited image, some of which might even be totally imaginary, to your printer color space with its much smaller gamut, leading to inaccurate colors at best and at worst to banding (posterization - gaps in what should be a smooth color transition, say, across an expanse of blue sky) and clipping (your carefully crafted muted transitions across delicate shades of red, for example, might get remapped to a solid block of dull red after conversion to your printer's color space).</para>
-        <para>In other words, large gamut working spaces, improperly handled, can lead to lost information on output. Small gamut working spaces can clip information on input. Like Wikipedia says, it's a trade-off. Here is some oft-repeated advice:</para>
-        <orderedlist>
-          <listitem><para>For images intended for the web, use one of the sRGB variants (there are several).  </para></listitem>
-          <listitem><para>For the most accuracy in your image editing (that is, making the most of your "bits" with the least risk of banding or clipping when you convert your image from your working space to an output space), use the smallest working space that includes all the colors in the scene that you photographed, plus a little extra room for those new colors you intentionally produce as you edit.  </para></listitem>
-          <listitem><para>If you are working in 8-bits rather than 16-bits, choose a smaller space rather than a larger space.</para></listitem>
-          <listitem><para>For archival purposes, convert your raw file to a 16-bit tiff with a large gamut working space to avoid losing color information. Then convert this archival tiff to your working space of choice (saving the converted working tiff under a new name, of course). See <ulink url="http://simon.tindemans.eu/essays/scenereferredworkflow">here</ulink> for more details.</para></listitem>
-        </orderedlist>
-
-        <para>The WHYs of these bits of advice regarding which working space to choose are beyond the scope of this tutorial.  See Bruce Lindbloom's excellent website (<ulink url="http://www.brucelindbloom.com/">Info, Information about RGB Working Spaces</ulink>) for a visual comparison of the gamut (array of included colors) of the various working color spaces.  See <ulink url="http://www.luminous-landscape.com/tutorials/prophoto-rgb.shtml">here</ulink> and <ulink url="http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm">here</ulink> for pro and con presentations, respectively, of the merits of using large gamut working spaces. And while you are on the <ulink url="http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm">cambridgeincolour.com</ulink> website, check out the tutorial on color management. </para>
-    </sect3>
-
-            <sect3 id="softproofing"> <title>Soft Proofing</title>
-                <para>Soft proofing is a way of previewing on the screen (monitor) the result to be expected from output on another device, typically a printer. It shows you the differences to be expected before you actually print (and waste your costly ink), so you can improve your settings without wasting time and money.
-                </para>
-
-            </sect3>
-
-            <sect3 id="rendering-intention"> <title>Rendering intention</title>
-
-                <para>Rendering intent refers to the way gamuts are handled when the intended target color space cannot handle the full gamut of the source color space. The four standard intents are described below; a short scripting sketch follows the list.</para>
-                <itemizedlist>
-
-                    <listitem><para><emphasis>Perceptual</emphasis>, also called Image or Maintain Full Gamut.  This is generally recommended for photographic images. The color gamut is expanded or compressed when moving between color spaces to maintain consistent overall appearance. Low saturation colors are changed very little. More saturated colors within the gamuts of both spaces may be altered to differentiate them from saturated colors outside the smaller gamut space. Perceptual rendering applies the same gamut compression to all images, even when the image contains no significant out-of-gamut colors.</para>
-                    </listitem>
-
-                    <listitem><para><emphasis>Relative Colorimetric</emphasis>, also called Proof or Preserve Identical Color and White Point.  Reproduces in-gamut colors exactly and clips out-of-gamut colors to the nearest reproducible hue.</para>
-                    </listitem>
-
-                    <listitem><para><emphasis>Absolute Colorimetric</emphasis>, also called Match or Preserve Identical Colors.  Reproduces in-gamut colors exactly and clips out-of-gamut colors to the nearest reproducible hue, sacrificing saturation and possibly lightness. On tinted papers, whites may be darkened to keep the hue identical to the original. For example, cyan may be added to the white of a cream-colored paper, effectively darkening the image. Rarely of interest to photographers.</para>
-                    </listitem>
-
-                    <listitem><para><emphasis>Saturation</emphasis>, also called Graphic or Preserve Saturation.  Maps the saturated primary colors in the source to saturated primary colors in the destination, neglecting differences in hue, saturation, or lightness. For block graphics; rarely of interest to photographers.</para>
-                    </listitem>
-
-                </itemizedlist>
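-                <para>In lcms-based tools the rendering intent is simply a parameter of the color transform. A minimal sketch using Pillow's ImageCms wrapper (profile names are placeholders; the enum names assume a recent Pillow release):</para>
-                <programlisting>
-from PIL import Image, ImageCms
-
-im = Image.open("photo.tif")  # placeholder: an RGB image
-
-transform = ImageCms.buildTransform(
-    "working-space.icc",    # source profile
-    "printer-paper.icc",    # destination profile
-    "RGB", "RGB",
-    renderingIntent=ImageCms.Intent.PERCEPTUAL)  # or any intent above
-
-out = ImageCms.applyTransform(im, transform)
-out.save("converted.tif")
-                </programlisting>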
-
-           </sect3>
-
-           <sect3 id="iccprofile-links"> <title>Links</title>
-               <itemizedlist>
-               <listitem><para><ulink url="http://www.oyranos.org/wiki/index.php?title=Main_Page">Color wiki</ulink></para></listitem>
-               <listitem><para><ulink url="http://en.wikipedia.org/wiki/CIELAB">CIELab</ulink></para></listitem>
-               <listitem><para><ulink url="http://en.wikipedia.org/wiki/Gamut">Gamut explained</ulink></para></listitem>
-               </itemizedlist>
-           </sect3>
-
-      </sect2>
+ <sect1 id="editor-cm">
+     <title>RAW File Treatment and Color Management</title>
+
+     &doc-editor-cm-intro;
+     &doc-editor-cm-definitions;
+     &doc-editor-cm-connection;
+     &doc-editor-cm-wkspace;
+     &doc-editor-cm-srgb;
+     &doc-editor-cm-monitor;
+     &doc-editor-cm-rawfile;
+     &doc-editor-cm-pcs;
+     &doc-editor-cm-rendering;
 
  </sect1>
 
diff --git a/digikam/index.docbook b/digikam/index.docbook
index 045ae82..41cfa28 100644
--- a/digikam/index.docbook
+++ b/digikam/index.docbook
@@ -89,6 +89,15 @@
   <!ENTITY doc-editor-decorate-inserttext      SYSTEM "editor-decorate-inserttext.docbook">
 
   <!ENTITY doc-editor-cm                       SYSTEM "editor-cm.docbook">
+  <!ENTITY doc-editor-cm-intro                 SYSTEM "editor-cm-intro.docbook">
+  <!ENTITY doc-editor-cm-definitions           SYSTEM "editor-cm-definitions.docbook">
+  <!ENTITY doc-editor-cm-connection            SYSTEM "editor-cm-connection.docbook">
+  <!ENTITY doc-editor-cm-wkspace               SYSTEM "editor-cm-wkspace.docbook">
+  <!ENTITY doc-editor-cm-srgb                  SYSTEM "editor-cm-srgb.docbook">
+  <!ENTITY doc-editor-cm-monitor               SYSTEM "editor-cm-monitor.docbook">
+  <!ENTITY doc-editor-cm-rawfile               SYSTEM "editor-cm-rawfile.docbook">
+  <!ENTITY doc-editor-cm-pcs                   SYSTEM "editor-cm-pcs.docbook">
+  <!ENTITY doc-editor-cm-rendering             SYSTEM "editor-cm-rendering.docbook">
 
 <!-- Tools Chapter -->
 
diff --git a/showfoto/index.docbook b/showfoto/index.docbook
index 6d14f1f..da926d9 100644
--- a/showfoto/index.docbook
+++ b/showfoto/index.docbook
@@ -62,6 +62,15 @@
   <!ENTITY doc-editor-decorate-inserttext      SYSTEM "../digikam/editor-decorate-inserttext.docbook">
 
   <!ENTITY doc-editor-cm                       SYSTEM "../digikam/editor-cm.docbook">
+  <!ENTITY doc-editor-cm-intro                 SYSTEM "../digikam/editor-cm-intro.docbook">
+  <!ENTITY doc-editor-cm-definitions           SYSTEM "../digikam/editor-cm-definitions.docbook">
+  <!ENTITY doc-editor-cm-connection            SYSTEM "../digikam/editor-cm-connection.docbook">
+  <!ENTITY doc-editor-cm-wkspace               SYSTEM "../digikam/editor-cm-wkspace.docbook">
+  <!ENTITY doc-editor-cm-srgb                  SYSTEM "../digikam/editor-cm-srgb.docbook">
+  <!ENTITY doc-editor-cm-monitor               SYSTEM "../digikam/editor-cm-monitor.docbook">
+  <!ENTITY doc-editor-cm-rawfile               SYSTEM "../digikam/editor-cm-rawfile.docbook">
+  <!ENTITY doc-editor-cm-pcs                   SYSTEM "../digikam/editor-cm-pcs.docbook">
+  <!ENTITY doc-editor-cm-rendering             SYSTEM "../digikam/editor-cm-rendering.docbook">
 
   <!ENTITY doc-tool-acquireimages              SYSTEM "../digikam/tool-acquireimages.docbook">
   <!ENTITY doc-tool-presentation               SYSTEM "../digikam/tool-presentation.docbook">
