[PATCH] Request review for KSharedDataCache (Round 2)

Oswald Buddenhagen ossi at kde.org
Sun Apr 25 20:37:43 BST 2010


On Sat, Apr 24, 2010 at 11:05:07PM -0400, Michael Pyne wrote:
> +class KImageCache::Private
> +{
> +    bool enableCaching;
>
should be enablePixmapCaching, to match setPixmapCaching().

> +QPixmap KImageCache::findPixmap(const QString &key) const
>
this should also automatically insert into the pixmap cache.
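something along these lines - lookupData() is made up here, standing in
for whatever the actual shared-memory lookup ends up being called:

    QPixmap KImageCache::findPixmap(const QString &key) const
    {
        QPixmap pix;
        if (d->enableCaching && QPixmapCache::find(key, &pix))
            return pix;

        QByteArray data;
        if (lookupData(key, &data)) { // hypothetical shm lookup
            pix.loadFromData(data);
            // the missing step: prime the local pixmap cache
            if (d->enableCaching)
                QPixmapCache::insert(key, pix);
        }
        return pix;
    }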

> +    bool insertImage(const QString &key, const QImage &image);
> +    /**
> +     * Inserts the @p image into the shared cache, accessible with @p key. This
> +     * variant is preferred over insertPixmap() if your source data is already a
> +     * QImage, if it is essentially that the image be in shared memory (such as
                                    ^^
grammar - "essential", presumably.

> +    /**
> +     * Enables or disables local pixmap caching. If it is anticipated that a pixmap
> +     * will be frequently needed then this can actually save memory overall since the
> +     * X server or graphics card will not have to store duplicate copies of the same
> +     * image.
> +     *
> +     * @param enable Enables pixmap caching if true, disables otherwise.
> +     */
> +    void setPixmapCaching(bool enable);
> +
a somewhat bizarre workaround for the brokenness of KPixmapCache, if you
ask me. :D
but whatever.

> +#define KSHAREDDATACACHE_H

> +class QAbstractItemModel;
>
obsolete

> +class KDECORE_EXPORT KSharedDataCache
> +{
> +public:

> +    /// Used to give a general idea of expected item size when constructing
> +    /// the shared cache.
> +    enum ItemSize
> +    {
> +        /// Items of less than 4 kilobytes of size on average.
> +        SmallItemSize,
> +        /// Items of between 4 kilobytes to 64 kilobytes of size on average.
> +        MediumItemSize,
> +        /// Items larger than 64 kilobytes of size on average.
> +        LargeItemSize
> +    };
> +
this is unwise.
- unnecessary restriction of possible ranges
- from the user's pov, arbitrary quantization
- no actual advantage api-wise

i.e., it directly reflects the implementation, not a desired api.
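if a size hint is needed at all, take the expected size directly. just a
sketch, the signature is of course hypothetical:

    // raw expected item size in bytes; 0 = "no idea", let the cache
    // pick a page size on its own:
    KSharedDataCache(const QString &cacheName,
                     unsigned defaultCacheSize,
                     unsigned expectedItemSize = 0);

that keeps the full range open and doesn't leak the page-size
quantization into the api.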

> +    enum EvictionPolicy
> +    {
> +        // It's important that the default value be 0 due to mmap().
>
grammar. and spelling out *why* mmap() matters here (fresh anonymous
pages are zero-filled) wouldn't hurt.

> +    /**
> +     * Sets the entry removal policy for the shared cache to
> +     * @p newPolicy. The default is EvictionPolicy::NoRemovalPreference.
>
outdated enum value reference

> +     * Note that even if the insert was successful that the newly added entry
>
                                                      ^^^^
comma instead? (not that i actually know the comma rules, but it feels
less wrong :D)

> +    unsigned cacheSize() const;
> +    unsigned cacheFree() const;
> +
maybe totalSize() and freeSize() instead.

> +// =========================================================================
> +// Description of the cache:
> +//
> +// The page
> +// table contains shared memory split into fixed-size pages (the exact page
> +// size is actually configurable at run-time, to allow for balancing the
> +// demands of index/page table sizes and fragmentation.
                                                         ^
missing closing paren. that's usually a good sign of overusing them -
hyphens and periods often do wonders.
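e.g., just split it:

    // The page table contains shared memory split into fixed-size pages.
    // The exact page size is actually configurable at run-time, to allow
    // balancing the demands of index/page table sizes and fragmentation.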

> +struct indexTableEntry
> +struct pageTableEntry
>
even internal structures should be CamelCased.

> +struct sharedPage
> +{
> +    // The actual struct is much larger in size, but since the size varies we
> +    // just use a 1-length array here for a placeholder.
> +    char data[1];
> +};
> +
i find it a bit pointless to define a struct for that. it just adds
casts.
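a plain accessor returning the raw address would do. sketch only -
offsetToFirstPage() is made up, cachePageSize() is yours:

    void *page(uint pageNum)
    {
        return reinterpret_cast<char *>(this)
               + offsetToFirstPage() + pageNum * cachePageSize();
    }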

> +// This is effectively the layout of the shared memory segment. The variables
> +// contained within form the header, data contained afterwards is pointed to
> +// by using special accessor functions.
> +struct SharedMemory
> +{
> +    static const uint PIXMAP_CACHE_VERSION = 1;
> +    static const uint MINIMUM_CACHE_SIZE = 4096;
> +
while the values are actually inlined in optimized builds, the constant
is still emitted as a memory location. so i prefer enums for constants. 
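i.e., same values, but guaranteed to stay pure compile-time constants:

    enum {
        PIXMAP_CACHE_VERSION = 1,
        MINIMUM_CACHE_SIZE = 4096
    };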

> +    mutable pthread_mutex_t lockMutex;
> +    QAtomicInt ready; ///< DO NOT INITIALIZE
> +    uint       cacheSize;
> +    uint       cacheAvail;

> +    quint8     version;
>
it would be wise to put that field first and define that any version
upgrades may not change that.
ready should be the second field. no, actually, it should be the first
and fixed as well, as it is used even prior to checking the version.
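i.e. (member list abridged, obviously):

    struct SharedMemory
    {
        // fixed for all time: ready is checked before anything else,
        // version before interpreting the rest of the layout.
        QAtomicInt ready; ///< DO NOT INITIALIZE
        quint8     version;

        // everything below may change between versions.
        mutable pthread_mutex_t lockMutex;
        uint       cacheSize;
        uint       cacheAvail;
        // ...
    };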

> +        // These must be updated to make some of our auxiliary functions
> +        // work right since their values will be based on the cache size.
> +        cacheSize = _cacheSize;
> +        pageSize = _pageSize;
> +        version = PIXMAP_CACHE_VERSION;
> +

> +        // This doesn't do perfect
> +        // defragmentation since there may be space at the beginning, but it
> +        // should result in a minimum of memcpy.
> +
>
the saving from that is likely to be minimal, as the first hole is
likely to be found soon.
but it may create the situation of needing to free up more pages and
defragment again right after defragmentation, which you are handling with
extra code.

> +            if (curIndex < 0) {
> +                kWarning(264) << "Unable to remove enough used pages to allocate"
> +                              << numberNeeded << "pages. In theory the cache is empty, weird.";
kError() then

> +            pagesRemoved += intCeil(indexTable()[curIndex].totalItemSize, cachePageSize());
>
this parallel data maintenance looks weird.
why not use the free page count directly?
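sketch of what i mean - countFreePages(), removeEntry() and
findLeastImportantEntry() are hypothetical names for whatever you end up
with:

    // loop on the actual free page count instead of keeping a
    // parallel pagesRemoved counter in sync:
    while (countFreePages() < numberNeeded && curIndex >= 0) {
        removeEntry(curIndex);
        curIndex = findLeastImportantEntry();
    }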

> +    bool lock() const
> +    {
> +        struct timespec timeout;
> +        timeout.tv_sec = 1;
>
too optimistic. see next.

> +        while (shm->ready != 2) {
> +            busyCount++;
> +            if (++busyCount >= 20) {
> +                // Didn't acquire within .1 seconds?  Assume an issue exists
>
way too optimistic. a little swapping can easily throw this off.
10 seconds with an exponential delay time growth may work best.

i'm sick and tired, so this is not a perfectly thorough review. :)




