[neon/backports-focal/libavif/Neon/release] /: New upstream version 0.9.3

Boyuan Yang null at kde.org
Fri Oct 29 12:05:05 BST 2021


Git commit 6619258aace13f21df215fac07dd5c9b03f337af by Boyuan Yang.
Committed on 24/10/2021 at 23:59.
Pushed by jriddell into branch 'Neon/release'.

New upstream version 0.9.3

M  +1    -1    .github/workflows/ci.yml
M  +43   -4    CHANGELOG.md
M  +4    -4    CMakeLists.txt
M  +106  -41   apps/avifdec.c
M  +1    -1    apps/avifenc.c
M  +7    -2    apps/shared/avifpng.c
M  +5    -1    apps/shared/avifpng.h
M  +5    -4    apps/shared/avifutil.c
M  +1    -1    apps/shared/avifutil.h
M  +7    -5    apps/shared/y4m.c
M  +1    -1    ext/aom.cmd
M  +2    -1    ext/dav1d.cmd
A  +17   -0    ext/dav1d_oss_fuzz.patch
C  +2    -1    ext/dav1d_oss_fuzz.sh [from: ext/dav1d.cmd - 89% similarity]
M  +87   -30   include/avif/avif.h
M  +20   -9    include/avif/internal.h
M  +3    -3    libavif.pc.cmake
M  +14   -0    src/avif.c
M  +164  -61   src/codec_aom.c
M  +20   -5    src/codec_dav1d.c
M  +12   -2    src/codec_libgav1.c
M  +563  -157  src/read.c
M  +5    -0    src/reformat.c
A  +150  -0    src/scale.c     [License: BSD]
M  +150  -31   src/write.c
M  +1296 -1296 tests/data/tests.json
M  +2    -2    tests/docker/build.sh
M  +20   -5    tests/oss-fuzz/avif_decode_fuzzer.cc
M  +3    -0    tests/testcase.c

https://invent.kde.org/neon/backports-focal/libavif/commit/6619258aace13f21df215fac07dd5c9b03f337af

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index fa4a701..287056e 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -60,4 +60,4 @@ jobs:
     - name: Run AVIF Tests (on Linux)
       if: runner.os == 'Linux'
       working-directory: ./build
-      run: ./aviftest ../tests/data
+      run: ./aviftest ../tests/data --io-only
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 549eb8f..b341616 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,14 +6,53 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.9.3] - 2021-10-20
+
+### Added
+* Support for progressive AVIFs and operating point selection
+* Add automatic tile scaling to the item's ispe or track's dims
+* Add diagnostic messages for AV1 decode failures
+* avifdec: Add PNG compression level arg
+* Make image size limit configurable, expose to avifdec
+* Add the AVIF_STRICT_ALPHA_ISPE_REQUIRED flag
+
+### Changed
+* Mandate ispe and disallow zero width or height (#640).
+* Re-map libavif speed 7-10 to libaom speed 7-9 (#682)
+*  Refer to https://aomedia-review.googlesource.com/c/aom/+/140624
+*  If you were using libaom with the following avif speed setting:
+*   - speed 0-6: no change is needed
+*   - speed 7:   change to speed 6 for the same results
+*   - speed 8-9: re-test and re-adjust speed according to your app needs
+* Update aom.cmd: v3.2.0
+* Update dav1d.cmd: 0.9.2
+* Pass TestCase's minQuantizer, maxQuantizer, speed to encoder.
+* Regenerate tests.json
+* Disable JSON-based tests for now, the metrics are inconsistent/unreliable
+* Set diagnostic message for aom_codec_set_option()
+* Re-map libavif-libaom speed settings (#682)
+* Bump of version in CMakeLists.txt was forgotten
+* avifdec: Better message for unsupported file extension
+* Do not copy input image when encoding with libaom unless width or height is 1
+* Fix the comment for AVIF_STRICT_PIXI_REQUIRED
+* Update libavif.pc.cmake (#692)
+* In 32-bit builds set dav1d's frame_size_limit setting to 8192*8192
+* Allocate alpha alongside YUV (if necessary) during y4m decode to avoid incorrect alphaRowBytes math
+* Change avif_decode_fuzzer to be more like Chrome
+* Update codec_dav1d.c for the new threading model
+* Generalized ipco property deduplication
+* Rename avifParseMoovBox to avifParseMovieBox for consistency
+* Simplify idat storage for avifMeta structure (#756)
+* Fix oss-fuzz coverage build failure of dav1d
+* Redesign AVIF_DECODER_SOURCE_AUTO to honor the FileTypeBox's major brand
+* Use "C420" as default Y4M color space parameter
+
 ## [0.9.2] - 2021-06-23
 
 ### Added
-
 * avifenc, avifdec: Allow "-j all" to automatically use all of the cores on the machine (#670)
 
 ### Changed
-
 * Refactor imir implementation to match HEIF Draft Amendment 2 (#665)
 * Merge avifCodec's open call with its getNextImage call to avoid codec init during parse, and simplify the codec API (#637)
 * Update aom.cmd: v3.1.1 (#674)
@@ -44,7 +83,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 * Declare the param of avifDumpDiagnostics as const (#633)
 * Adjust gdk-pixbuf loader for new API change (#668)
 * Fix gdk-pixbuf loader install path (#615)
-* Don't need to disable MSVC warnings 5031 and 5032 (#681)
 
 ## [0.9.1] - 2021-05-19
 
@@ -664,7 +702,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Constants `AVIF_VERSION`, `AVIF_VERSION_MAJOR`, `AVIF_VERSION_MINOR`, `AVIF_VERSION_PATCH`
 - `avifVersion()` function
 
-[Unreleased]: https://github.com/AOMediaCodec/libavif/compare/v0.9.2...HEAD
+[Unreleased]: https://github.com/AOMediaCodec/libavif/compare/v0.9.3...HEAD
+[0.9.3]: https://github.com/AOMediaCodec/libavif/compare/v0.9.2...v0.9.3
 [0.9.2]: https://github.com/AOMediaCodec/libavif/compare/v0.9.1...v0.9.2
 [0.9.1]: https://github.com/AOMediaCodec/libavif/compare/v0.9.0...v0.9.1
 [0.9.0]: https://github.com/AOMediaCodec/libavif/compare/v0.8.4...v0.9.0
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 7d78025..7182df7 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -7,7 +7,7 @@ cmake_minimum_required(VERSION 3.5)
 # and find_package()
 list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules")
 
-project(libavif LANGUAGES C VERSION 0.9.1)
+project(libavif LANGUAGES C VERSION 0.9.3)
 
 # Set C99 as the default
 set(CMAKE_C_STANDARD 99)
@@ -19,7 +19,7 @@ set(CMAKE_C_STANDARD 99)
 #     Increment MINOR. Set PATCH to 0
 #   If the source code was changed, but there were no interface changes:
 #     Increment PATCH.
-set(LIBRARY_VERSION_MAJOR 12)
+set(LIBRARY_VERSION_MAJOR 13)
 set(LIBRARY_VERSION_MINOR 0)
 set(LIBRARY_VERSION_PATCH 0)
 set(LIBRARY_VERSION "${LIBRARY_VERSION_MAJOR}.${LIBRARY_VERSION_MINOR}.${LIBRARY_VERSION_PATCH}")
@@ -206,6 +206,7 @@ set(AVIF_SRCS
     src/read.c
     src/reformat.c
     src/reformat_libyuv.c
+    src/scale.c
     src/stream.c
     src/utils.c
     src/write.c
@@ -554,8 +555,6 @@ if(AVIF_BUILD_TESTS)
     endif()
 endif()
 
-configure_file(libavif.pc.cmake ${CMAKE_CURRENT_BINARY_DIR}/libavif.pc @ONLY)
-
 if(NOT SKIP_INSTALL_LIBRARIES AND NOT SKIP_INSTALL_ALL)
     install(TARGETS avif
         EXPORT ${PROJECT_NAME}-config
@@ -577,6 +576,7 @@ if(NOT SKIP_INSTALL_LIBRARIES AND NOT SKIP_INSTALL_ALL)
                 DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME})
     endif()
 
+    configure_file(libavif.pc.cmake ${CMAKE_CURRENT_BINARY_DIR}/libavif.pc @ONLY)
     install(FILES ${CMAKE_CURRENT_BINARY_DIR}/libavif.pc
             DESTINATION ${CMAKE_INSTALL_LIBDIR}/pkgconfig)
 endif()
diff --git a/apps/avifdec.c b/apps/avifdec.c
index 81df2cb..9e74504 100644
--- a/apps/avifdec.c
+++ b/apps/avifdec.c
@@ -33,12 +33,18 @@ static void syntax(void)
     printf("    -c,--codec C      : AV1 codec to use (choose from versions list below)\n");
     printf("    -d,--depth D      : Output depth [8,16]. (PNG only; For y4m, depth is retained, and JPEG is always 8bpc)\n");
     printf("    -q,--quality Q    : Output quality [0-100]. (JPEG only, default: %d)\n", DEFAULT_JPEG_QUALITY);
+    printf("    --png-compress L  : Set PNG compression level (PNG only; 0-9, 0=none, 9=max). Defaults to libpng's builtin default.\n");
     printf("    -u,--upsampling U : Chroma upsampling (for 420/422). automatic (default), fastest, best, nearest, or bilinear\n");
     printf("    -r,--raw-color    : Output raw RGB values instead of multiplying by alpha when saving to opaque formats\n");
     printf("                        (JPEG only; not applicable to y4m)\n");
+    printf("    --index           : When decoding an image sequence or progressive image, specify which frame index to decode (Default: 0)\n");
+    printf("    --progressive     : Enable progressive AVIF processing. If a progressive image is encountered and --progressive is passed,\n");
+    printf("                        avifdec will use --index to choose which layer to decode (in progressive order).\n");
     printf("    --no-strict       : Disable strict decoding, which disables strict validation checks and errors\n");
     printf("    -i,--info         : Decode all frames and display all image information instead of saving to disk\n");
     printf("    --ignore-icc      : If the input file contains an embedded ICC profile, ignore it (no-op if absent)\n");
+    printf("    --size-limit C    : Specifies the image size limit (in total pixels) that should be tolerated.\n");
+    printf("                        Default: %u, set to a smaller value to further restrict.\n", AVIF_DEFAULT_IMAGE_SIZE_LIMIT);
     printf("\n");
     avifPrintVersions();
 }
@@ -50,12 +56,16 @@ int main(int argc, char * argv[])
     int requestedDepth = 0;
     int jobs = 1;
     int jpegQuality = DEFAULT_JPEG_QUALITY;
+    int pngCompressionLevel = -1; // -1 is a sentinel to avifPNGWrite() to skip calling png_set_compression_level()
     avifCodecChoice codecChoice = AVIF_CODEC_CHOICE_AUTO;
     avifBool infoOnly = AVIF_FALSE;
     avifChromaUpsampling chromaUpsampling = AVIF_CHROMA_UPSAMPLING_AUTOMATIC;
     avifBool ignoreICC = AVIF_FALSE;
     avifBool rawColor = AVIF_FALSE;
+    avifBool allowProgressive = AVIF_FALSE;
     avifStrictFlags strictFlags = AVIF_STRICT_ENABLED;
+    uint32_t frameIndex = 0;
+    uint32_t imageSizeLimit = AVIF_DEFAULT_IMAGE_SIZE_LIMIT;
 
     if (argc < 2) {
         syntax();
@@ -110,6 +120,14 @@ int main(int argc, char * argv[])
             } else if (jpegQuality > 100) {
                 jpegQuality = 100;
             }
+        } else if (!strcmp(arg, "--png-compress")) {
+            NEXTARG();
+            pngCompressionLevel = atoi(arg);
+            if (pngCompressionLevel < 0) {
+                pngCompressionLevel = 0;
+            } else if (pngCompressionLevel > 9) {
+                pngCompressionLevel = 9;
+            }
         } else if (!strcmp(arg, "-u") || !strcmp(arg, "--upsampling")) {
             NEXTARG();
             if (!strcmp(arg, "automatic")) {
@@ -128,12 +146,24 @@ int main(int argc, char * argv[])
             }
         } else if (!strcmp(arg, "-r") || !strcmp(arg, "--raw-color")) {
             rawColor = AVIF_TRUE;
+        } else if (!strcmp(arg, "--progressive")) {
+            allowProgressive = AVIF_TRUE;
+        } else if (!strcmp(arg, "--index")) {
+            NEXTARG();
+            frameIndex = (uint32_t)atoi(arg);
         } else if (!strcmp(arg, "--no-strict")) {
             strictFlags = AVIF_STRICT_DISABLED;
         } else if (!strcmp(arg, "-i") || !strcmp(arg, "--info")) {
             infoOnly = AVIF_TRUE;
         } else if (!strcmp(arg, "--ignore-icc")) {
             ignoreICC = AVIF_TRUE;
+        } else if (!strcmp(arg, "--size-limit")) {
+            NEXTARG();
+            imageSizeLimit = strtoul(arg, NULL, 10);
+            if ((imageSizeLimit > AVIF_DEFAULT_IMAGE_SIZE_LIMIT) || (imageSizeLimit == 0)) {
+                fprintf(stderr, "ERROR: invalid image size limit: %s\n", arg);
+                return 1;
+            }
         } else {
             // Positional argument
             if (!inputFilename) {
@@ -164,7 +194,9 @@ int main(int argc, char * argv[])
         avifDecoder * decoder = avifDecoderCreate();
         decoder->maxThreads = jobs;
         decoder->codecChoice = codecChoice;
+        decoder->imageSizeLimit = imageSizeLimit;
         decoder->strictFlags = strictFlags;
+        decoder->allowProgressive = allowProgressive;
         avifResult result = avifDecoderSetIOFile(decoder, inputFilename);
         if (result != AVIF_RESULT_OK) {
             fprintf(stderr, "Cannot open file for read: %s\n", inputFilename);
@@ -182,17 +214,30 @@ int main(int argc, char * argv[])
                    decoder->durationInTimescales,
                    decoder->imageCount,
                    (decoder->imageCount == 1) ? "" : "s");
-            printf(" * Frames:\n");
+            if (decoder->imageCount > 1) {
+                printf(" * %s Frames: (%u expected frames)\n",
+                       (decoder->progressiveState != AVIF_PROGRESSIVE_STATE_UNAVAILABLE) ? "Progressive Image" : "Image Sequence",
+                       decoder->imageCount);
+            } else {
+                printf(" * Frame:\n");
+            }
 
-            int frameIndex = 0;
-            while (avifDecoderNextImage(decoder) == AVIF_RESULT_OK) {
-                printf("   * Decoded frame [%d] [pts %2.2f (%" PRIu64 " timescales)] [duration %2.2f (%" PRIu64 " timescales)]\n",
-                       frameIndex,
+            int currIndex = 0;
+            avifResult nextImageResult;
+            while ((nextImageResult = avifDecoderNextImage(decoder)) == AVIF_RESULT_OK) {
+                printf("   * Decoded frame [%d] [pts %2.2f (%" PRIu64 " timescales)] [duration %2.2f (%" PRIu64 " timescales)] [%ux%u]\n",
+                       currIndex,
                        decoder->imageTiming.pts,
                        decoder->imageTiming.ptsInTimescales,
                        decoder->imageTiming.duration,
-                       decoder->imageTiming.durationInTimescales);
-                ++frameIndex;
+                       decoder->imageTiming.durationInTimescales,
+                       decoder->image->width,
+                       decoder->image->height);
+                ++currIndex;
+            }
+            if (nextImageResult != AVIF_RESULT_NO_IMAGES_REMAINING) {
+                printf("ERROR: Failed to decode frame: %s\n", avifResultToString(nextImageResult));
+                avifDumpDiagnostics(&decoder->diag);
             }
         } else {
             printf("ERROR: Failed to decode image: %s\n", avifResultToString(result));
@@ -214,52 +259,72 @@ int main(int argc, char * argv[])
            (jobs == 1) ? "" : "s");
 
     int returnCode = 0;
-    avifImage * avif = avifImageCreateEmpty();
     avifDecoder * decoder = avifDecoderCreate();
     decoder->maxThreads = jobs;
     decoder->codecChoice = codecChoice;
+    decoder->imageSizeLimit = imageSizeLimit;
     decoder->strictFlags = strictFlags;
-    avifResult decodeResult = avifDecoderReadFile(decoder, avif, inputFilename);
-    if (decodeResult == AVIF_RESULT_OK) {
-        printf("Image decoded: %s\n", inputFilename);
-        printf("Image details:\n");
-        avifImageDump(avif, 0, 0);
+    decoder->allowProgressive = allowProgressive;
 
-        if (ignoreICC && (avif->icc.size > 0)) {
-            printf("[--ignore-icc] Discarding ICC profile.\n");
-            avifImageSetProfileICC(avif, NULL, 0);
-        }
+    avifResult result = avifDecoderSetIOFile(decoder, inputFilename);
+    if (result != AVIF_RESULT_OK) {
+        fprintf(stderr, "Cannot open file for read: %s\n", inputFilename);
+        returnCode = 1;
+        goto cleanup;
+    }
+
+    result = avifDecoderParse(decoder);
+    if (result != AVIF_RESULT_OK) {
+        fprintf(stderr, "ERROR: Failed to parse image: %s\n", avifResultToString(result));
+        returnCode = 1;
+        goto cleanup;
+    }
+
+    result = avifDecoderNthImage(decoder, frameIndex);
+    if (result != AVIF_RESULT_OK) {
+        fprintf(stderr, "ERROR: Failed to decode image: %s\n", avifResultToString(result));
+        returnCode = 1;
+        goto cleanup;
+    }
 
-        avifAppFileFormat outputFormat = avifGuessFileFormat(outputFilename);
-        if (outputFormat == AVIF_APP_FILE_FORMAT_UNKNOWN) {
-            fprintf(stderr, "Cannot determine output file extension: %s\n", outputFilename);
+    printf("Image decoded: %s\n", inputFilename);
+    printf("Image details:\n");
+    avifImageDump(decoder->image, 0, 0, decoder->progressiveState);
+
+    if (ignoreICC && (decoder->image->icc.size > 0)) {
+        printf("[--ignore-icc] Discarding ICC profile.\n");
+        avifImageSetProfileICC(decoder->image, NULL, 0);
+    }
+
+    avifAppFileFormat outputFormat = avifGuessFileFormat(outputFilename);
+    if (outputFormat == AVIF_APP_FILE_FORMAT_UNKNOWN) {
+        fprintf(stderr, "Cannot determine output file extension: %s\n", outputFilename);
+        returnCode = 1;
+    } else if (outputFormat == AVIF_APP_FILE_FORMAT_Y4M) {
+        if (!y4mWrite(outputFilename, decoder->image)) {
             returnCode = 1;
-        } else if (outputFormat == AVIF_APP_FILE_FORMAT_Y4M) {
-            if (!y4mWrite(outputFilename, avif)) {
-                returnCode = 1;
-            }
-        } else if (outputFormat == AVIF_APP_FILE_FORMAT_JPEG) {
-            // Bypass alpha multiply step during conversion
-            if (rawColor) {
-                avif->alphaPremultiplied = AVIF_TRUE;
-            }
-            if (!avifJPEGWrite(outputFilename, avif, jpegQuality, chromaUpsampling)) {
-                returnCode = 1;
-            }
-        } else if (outputFormat == AVIF_APP_FILE_FORMAT_PNG) {
-            if (!avifPNGWrite(outputFilename, avif, requestedDepth, chromaUpsampling)) {
-                returnCode = 1;
-            }
-        } else {
-            fprintf(stderr, "Unrecognized file extension: %s\n", outputFilename);
+        }
+    } else if (outputFormat == AVIF_APP_FILE_FORMAT_JPEG) {
+        // Bypass alpha multiply step during conversion
+        if (rawColor) {
+            decoder->image->alphaPremultiplied = AVIF_TRUE;
+        }
+        if (!avifJPEGWrite(outputFilename, decoder->image, jpegQuality, chromaUpsampling)) {
+            returnCode = 1;
+        }
+    } else if (outputFormat == AVIF_APP_FILE_FORMAT_PNG) {
+        if (!avifPNGWrite(outputFilename, decoder->image, requestedDepth, chromaUpsampling, pngCompressionLevel)) {
             returnCode = 1;
         }
     } else {
-        printf("ERROR: Failed to decode image: %s\n", avifResultToString(decodeResult));
-        avifDumpDiagnostics(&decoder->diag);
+        fprintf(stderr, "Unsupported output file extension: %s\n", outputFilename);
         returnCode = 1;
     }
+
+cleanup:
+    if (returnCode != 0) {
+        avifDumpDiagnostics(&decoder->diag);
+    }
     avifDecoderDestroy(decoder);
-    avifImageDestroy(avif);
     return returnCode;
 }
diff --git a/apps/avifenc.c b/apps/avifenc.c
index 0ca8ff4..a8c2aaf 100644
--- a/apps/avifenc.c
+++ b/apps/avifenc.c
@@ -1078,7 +1078,7 @@ int main(int argc, char * argv[])
         lossyHint = " (Lossless)";
     }
     printf("AVIF to be written:%s\n", lossyHint);
-    avifImageDump(gridCells ? gridCells[0] : image, gridDims[0], gridDims[1]);
+    avifImageDump(gridCells ? gridCells[0] : image, gridDims[0], gridDims[1], AVIF_PROGRESSIVE_STATE_UNAVAILABLE);
 
     printf("Encoding with AV1 codec '%s' speed [%d], color QP [%d (%s) <-> %d (%s)], alpha QP [%d (%s) <-> %d (%s)], tileRowsLog2 [%d], tileColsLog2 [%d], %d worker thread(s), please wait...\n",
            avifCodecName(codecChoice, AVIF_CODEC_FLAG_CAN_ENCODE),
diff --git a/apps/shared/avifpng.c b/apps/shared/avifpng.c
index 5af38ea..21a0557 100644
--- a/apps/shared/avifpng.c
+++ b/apps/shared/avifpng.c
@@ -121,7 +121,8 @@ avifBool avifPNGRead(const char * inputFilename, avifImage * avif, avifPixelForm
     avif->yuvFormat = requestedFormat;
     if (avif->yuvFormat == AVIF_PIXEL_FORMAT_NONE) {
         // Identity is only valid with YUV444.
-        avif->yuvFormat = (avif->matrixCoefficients == AVIF_MATRIX_COEFFICIENTS_IDENTITY) ? AVIF_PIXEL_FORMAT_YUV444 : AVIF_APP_DEFAULT_PIXEL_FORMAT;
+        avif->yuvFormat = (avif->matrixCoefficients == AVIF_MATRIX_COEFFICIENTS_IDENTITY) ? AVIF_PIXEL_FORMAT_YUV444
+                                                                                          : AVIF_APP_DEFAULT_PIXEL_FORMAT;
     }
     avif->depth = requestedDepth;
     if (avif->depth == 0) {
@@ -160,7 +161,7 @@ cleanup:
     return readResult;
 }
 
-avifBool avifPNGWrite(const char * outputFilename, const avifImage * avif, uint32_t requestedDepth, avifChromaUpsampling chromaUpsampling)
+avifBool avifPNGWrite(const char * outputFilename, const avifImage * avif, uint32_t requestedDepth, avifChromaUpsampling chromaUpsampling, int compressionLevel)
 {
     volatile avifBool writeResult = AVIF_FALSE;
     png_structp png = NULL;
@@ -217,6 +218,10 @@ avifBool avifPNGWrite(const char * outputFilename, const avifImage * avif, uint3
     // It is up to the enduser to decide if they want to keep their ICC profiles or not.
     png_set_option(png, PNG_SKIP_sRGB_CHECK_PROFILE, 1);
 
+    if (compressionLevel >= 0) {
+        png_set_compression_level(png, compressionLevel);
+    }
+
     png_set_IHDR(png, info, avif->width, avif->height, rgb.depth, PNG_COLOR_TYPE_RGBA, PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT, PNG_FILTER_TYPE_DEFAULT);
     if (avif->icc.data && (avif->icc.size > 0)) {
         png_set_iCCP(png, info, "libavif", 0, avif->icc.data, (png_uint_32)avif->icc.size);
diff --git a/apps/shared/avifpng.h b/apps/shared/avifpng.h
index 45c7f75..d3dba8e 100644
--- a/apps/shared/avifpng.h
+++ b/apps/shared/avifpng.h
@@ -8,6 +8,10 @@
 
 // if (requestedDepth == 0), do best-fit
 avifBool avifPNGRead(const char * inputFilename, avifImage * avif, avifPixelFormat requestedFormat, uint32_t requestedDepth, uint32_t * outPNGDepth);
-avifBool avifPNGWrite(const char * outputFilename, const avifImage * avif, uint32_t requestedDepth, avifChromaUpsampling chromaUpsampling);
+avifBool avifPNGWrite(const char * outputFilename,
+                      const avifImage * avif,
+                      uint32_t requestedDepth,
+                      avifChromaUpsampling chromaUpsampling,
+                      int compressionLevel);
 
 #endif // ifndef LIBAVIF_APPS_SHARED_AVIFPNG_H
diff --git a/apps/shared/avifutil.c b/apps/shared/avifutil.c
index c3760a3..27611dd 100644
--- a/apps/shared/avifutil.c
+++ b/apps/shared/avifutil.c
@@ -38,7 +38,7 @@ static void printClapFraction(const char * name, int32_t n, int32_t d)
     printf(", ");
 }
 
-static void avifImageDumpInternal(const avifImage * avif, uint32_t gridCols, uint32_t gridRows, avifBool alphaPresent)
+static void avifImageDumpInternal(const avifImage * avif, uint32_t gridCols, uint32_t gridRows, avifBool alphaPresent, avifProgressiveState progressiveState)
 {
     uint32_t width = avif->width;
     uint32_t height = avif->height;
@@ -105,17 +105,18 @@ static void avifImageDumpInternal(const avifImage * avif, uint32_t gridCols, uin
             printf("    * imir (Mirror)        : Mode %u (%s)\n", avif->imir.mode, (avif->imir.mode == 0) ? "top-to-bottom" : "left-to-right");
         }
     }
+    printf(" * Progressive    : %s\n", avifProgressiveStateToString(progressiveState));
 }
 
-void avifImageDump(avifImage * avif, uint32_t gridCols, uint32_t gridRows)
+void avifImageDump(avifImage * avif, uint32_t gridCols, uint32_t gridRows, avifProgressiveState progressiveState)
 {
     const avifBool alphaPresent = avif->alphaPlane && (avif->alphaRowBytes > 0);
-    avifImageDumpInternal(avif, gridCols, gridRows, alphaPresent);
+    avifImageDumpInternal(avif, gridCols, gridRows, alphaPresent, progressiveState);
 }
 
 void avifContainerDump(avifDecoder * decoder)
 {
-    avifImageDumpInternal(decoder->image, 0, 0, decoder->alphaPresent);
+    avifImageDumpInternal(decoder->image, 0, 0, decoder->alphaPresent, decoder->progressiveState);
 }
 
 void avifPrintVersions(void)
diff --git a/apps/shared/avifutil.h b/apps/shared/avifutil.h
index 57f136a..79e1f3c 100644
--- a/apps/shared/avifutil.h
+++ b/apps/shared/avifutil.h
@@ -19,7 +19,7 @@
 #define AVIF_FMT_ZU "%zu"
 #endif
 
-void avifImageDump(avifImage * avif, uint32_t gridCols, uint32_t gridRows);
+void avifImageDump(avifImage * avif, uint32_t gridCols, uint32_t gridRows, avifProgressiveState progressiveState);
 void avifContainerDump(avifDecoder * decoder);
 void avifPrintVersions(void);
 void avifDumpDiagnostics(const avifDiagnostics * diag);
diff --git a/apps/shared/y4m.c b/apps/shared/y4m.c
index f7270f9..8c80772 100644
--- a/apps/shared/y4m.c
+++ b/apps/shared/y4m.c
@@ -215,9 +215,10 @@ avifBool y4mRead(const char * inputFilename, avifImage * avif, avifAppSourceTimi
     struct y4mFrameIterator frame;
     frame.width = -1;
     frame.height = -1;
-    frame.depth = -1;
+    // Default to the color space "C420" to match the defaults of aomenc and ffmpeg.
+    frame.depth = 8;
     frame.hasAlpha = AVIF_FALSE;
-    frame.format = AVIF_PIXEL_FORMAT_NONE;
+    frame.format = AVIF_PIXEL_FORMAT_YUV420;
     frame.range = AVIF_RANGE_LIMITED;
     frame.chromaSamplePosition = AVIF_CHROMA_SAMPLE_POSITION_UNKNOWN;
     memset(&frame.sourceTiming, 0, sizeof(avifAppSourceTiming));
@@ -338,8 +339,7 @@ avifBool y4mRead(const char * inputFilename, avifImage * avif, avifAppSourceTimi
         goto cleanup;
     }
 
-    if ((frame.width < 1) || (frame.height < 1) || ((frame.depth != 8) && (frame.depth != 10) && (frame.depth != 12)) ||
-        (frame.format == AVIF_PIXEL_FORMAT_NONE)) {
+    if ((frame.width < 1) || (frame.height < 1) || ((frame.depth != 8) && (frame.depth != 10) && (frame.depth != 12))) {
         fprintf(stderr, "Failed to parse y4m header (not enough information): %s\n", frame.displayFilename);
         goto cleanup;
     }
@@ -356,6 +356,9 @@ avifBool y4mRead(const char * inputFilename, avifImage * avif, avifAppSourceTimi
     avif->yuvRange = frame.range;
     avif->yuvChromaSamplePosition = frame.chromaSamplePosition;
     avifImageAllocatePlanes(avif, AVIF_PLANES_YUV);
+    if (frame.hasAlpha) {
+        avifImageAllocatePlanes(avif, AVIF_PLANES_A);
+    }
 
     avifPixelFormatInfo info;
     avifGetPixelFormatInfo(avif->yuvFormat, &info);
@@ -378,7 +381,6 @@ avifBool y4mRead(const char * inputFilename, avifImage * avif, avifAppSourceTimi
         }
     }
     if (frame.hasAlpha) {
-        avifImageAllocatePlanes(avif, AVIF_PLANES_A);
         if (fread(avif->alphaPlane, 1, planeBytes[3], frame.inputFile) != planeBytes[3]) {
             fprintf(stderr, "Failed to read y4m plane (not enough data): %s\n", frame.displayFilename);
             goto cleanup;
diff --git a/ext/aom.cmd b/ext/aom.cmd
index 59c0d2f..12e62d6 100755
--- a/ext/aom.cmd
+++ b/ext/aom.cmd
@@ -8,7 +8,7 @@
 : # If you're running this on Windows, be sure you've already run this (from your VC2019 install dir):
 : #     "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Auxiliary\Build\vcvars64.bat"
 
-git clone -b v3.1.1 --depth 1 https://aomedia.googlesource.com/aom
+git clone -b v3.2.0 --depth 1 https://aomedia.googlesource.com/aom
 
 cd aom
 mkdir build.libavif
diff --git a/ext/dav1d.cmd b/ext/dav1d.cmd
index 958a2f9..d8b9a17 100755
--- a/ext/dav1d.cmd
+++ b/ext/dav1d.cmd
@@ -8,7 +8,8 @@
 : # If you're running this on Windows, be sure you've already run this (from your VC2019 install dir):
 : #     "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Auxiliary\Build\vcvars64.bat"
 
-git clone -b 0.9.0 --depth 1 https://code.videolan.org/videolan/dav1d.git
+# When updating the dav1d version, make the same change to dav1d_oss_fuzz.sh.
+git clone -b 0.9.2 --depth 1 https://code.videolan.org/videolan/dav1d.git
 
 cd dav1d
 mkdir build
diff --git a/ext/dav1d_oss_fuzz.patch b/ext/dav1d_oss_fuzz.patch
new file mode 100644
index 0000000..888bd55
--- /dev/null
+++ b/ext/dav1d_oss_fuzz.patch
@@ -0,0 +1,17 @@
+diff --git a/meson.build b/meson.build
+index 1bf69ab..1a7c409 100644
+--- a/meson.build
++++ b/meson.build
+@@ -382,7 +382,11 @@ endif
+ 
+ cdata.set10('ARCH_PPC64LE', host_machine.cpu() == 'ppc64le')
+ 
+-if cc.symbols_have_underscore_prefix()
+# meson's cc.symbols_have_underscore_prefix() is unfortunately unreliable
++# when additional flags like '-fprofile-instr-generate' are passed via CFLAGS
++# see following meson issue https://github.com/mesonbuild/meson/issues/5482
++if (host_machine.system() == 'darwin' or
++   (host_machine.system() == 'windows' and host_machine.cpu_family() == 'x86'))
+     cdata.set10('PREFIX', true)
+     cdata_asm.set10('PREFIX', true)
+ endif
diff --git a/ext/dav1d.cmd b/ext/dav1d_oss_fuzz.sh
similarity index 89%
copy from ext/dav1d.cmd
copy to ext/dav1d_oss_fuzz.sh
index 958a2f9..ee96e52 100755
--- a/ext/dav1d.cmd
+++ b/ext/dav1d_oss_fuzz.sh
@@ -8,9 +8,10 @@
 : # If you're running this on Windows, be sure you've already run this (from your VC2019 install dir):
 : #     "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Auxiliary\Build\vcvars64.bat"
 
-git clone -b 0.9.0 --depth 1 https://code.videolan.org/videolan/dav1d.git
+git clone -b 0.9.2 --depth 1 https://code.videolan.org/videolan/dav1d.git
 
 cd dav1d
+patch -p1 < ../dav1d_oss_fuzz.patch
 mkdir build
 cd build
 
diff --git a/include/avif/avif.h b/include/avif/avif.h
index ce3681b..47728b9 100644
--- a/include/avif/avif.h
+++ b/include/avif/avif.h
@@ -57,7 +57,7 @@ extern "C" {
 // to leverage in-development code without breaking their stable builds.
 #define AVIF_VERSION_MAJOR 0
 #define AVIF_VERSION_MINOR 9
-#define AVIF_VERSION_PATCH 2
+#define AVIF_VERSION_PATCH 3
 #define AVIF_VERSION_DEVEL 0
 #define AVIF_VERSION \
     ((AVIF_VERSION_MAJOR * 1000000) + (AVIF_VERSION_MINOR * 10000) + (AVIF_VERSION_PATCH * 100) + AVIF_VERSION_DEVEL)
@@ -68,6 +68,10 @@ typedef int avifBool;
 
 #define AVIF_DIAGNOSTICS_ERROR_BUFFER_SIZE 256
 
+// A reasonable default for maximum image size to avoid out-of-memory errors or integer overflow in
+// (32-bit) int or unsigned int arithmetic operations.
+#define AVIF_DEFAULT_IMAGE_SIZE_LIMIT (16384 * 16384)
+
 // a 12 hour AVIF image sequence, running at 60 fps (a basic sanity check as this is quite ridiculous)
 #define AVIF_DEFAULT_IMAGE_COUNT_LIMIT (12 * 3600 * 60)
 
@@ -708,7 +712,7 @@ typedef enum avifStrictFlag
     // Disables all strict checks.
     AVIF_STRICT_DISABLED = 0,
 
-    // Allow the PixelInformationProperty ('pixi') to be missing in AV1 image items. libheif v1.11.0
+    // Requires the PixelInformationProperty ('pixi') be present in AV1 image items. libheif v1.11.0
     // or older does not add the 'pixi' item property to AV1 image items. If you need to decode AVIF
     // images encoded by libheif v1.11.0 or older, be sure to disable this bit. (This issue has been
     // corrected in libheif v1.12.0.)
@@ -719,8 +723,16 @@ typedef enum avifStrictFlag
     // function returns AVIF_FALSE and this strict flag is set, the decode will fail.
     AVIF_STRICT_CLAP_VALID = (1 << 1),
 
+    // Requires the ImageSpatialExtentsProperty ('ispe') be present in alpha auxiliary image items.
+    // avif-serialize 0.7.3 or older does not add the 'ispe' item property to alpha auxiliary image
+    // items. If you need to decode AVIF images encoded by the cavif encoder with avif-serialize
+    // 0.7.3 or older, be sure to disable this bit. (This issue has been corrected in avif-serialize
+    // 0.7.4.) See https://github.com/kornelski/avif-serialize/issues/3 and
+    // https://crbug.com/1246678.
+    AVIF_STRICT_ALPHA_ISPE_REQUIRED = (1 << 2),
+
     // Maximum strictness; enables all bits above. This is avifDecoder's default.
-    AVIF_STRICT_ENABLED = AVIF_STRICT_PIXI_REQUIRED | AVIF_STRICT_CLAP_VALID
+    AVIF_STRICT_ENABLED = AVIF_STRICT_PIXI_REQUIRED | AVIF_STRICT_CLAP_VALID | AVIF_STRICT_ALPHA_ISPE_REQUIRED
 } avifStrictFlag;
 typedef uint32_t avifStrictFlags;
 
@@ -735,7 +747,9 @@ struct avifDecoderData;
 
 typedef enum avifDecoderSource
 {
-    // If a moov box is present in the .avif(s), use the tracks in it, otherwise decode the primary item.
+    // Honor the major brand signaled in the beginning of the file to pick between an AVIF sequence
+    // ('avis', tracks-based) or a single image ('avif', item-based). If the major brand is neither
+    // of these, prefer the AVIF sequence ('avis', tracks-based), if present.
     AVIF_DECODER_SOURCE_AUTO = 0,
 
     // Use the primary item and the aux (alpha) item in the avif(s).
@@ -760,8 +774,31 @@ typedef struct avifImageTiming
     uint64_t durationInTimescales; // duration in "timescales"
 } avifImageTiming;
 
+typedef enum avifProgressiveState
+{
+    // The current AVIF/Source does not offer a progressive image. This will always be the state
+    // for an image sequence.
+    AVIF_PROGRESSIVE_STATE_UNAVAILABLE = 0,
+
+    // The current AVIF/Source offers a progressive image, but avifDecoder.allowProgressive is not
+    // enabled, so it will behave as if the image was not progressive and will simply decode the
+    // best version of this item.
+    AVIF_PROGRESSIVE_STATE_AVAILABLE,
+
+    // The current AVIF/Source offers a progressive image, and avifDecoder.allowProgressive is true.
+    // In this state, avifDecoder.imageCount will be the count of all of the available progressive
+    // layers, and any specific layer can be decoded using avifDecoderNthImage() as if it was an
+    // image sequence, or simply using repeated calls to avifDecoderNextImage() to decode better and
+    // better versions of this image.
+    AVIF_PROGRESSIVE_STATE_ACTIVE
+} avifProgressiveState;
+AVIF_API const char * avifProgressiveStateToString(avifProgressiveState progressiveState);
+
 typedef struct avifDecoder
 {
+    // --------------------------------------------------------------------------------------------
+    // Inputs
+
     // Defaults to AVIF_CODEC_CHOICE_AUTO: Preference determined by order in availableCodecs table (avif.c)
     avifCodecChoice codecChoice;
 
@@ -772,6 +809,39 @@ typedef struct avifDecoder
     // Set this via avifDecoderSetSource().
     avifDecoderSource requestedSource;
 
+    // If this is true and a progressive AVIF is decoded, avifDecoder will behave as if the AVIF is
+    // an image sequence, in that it will set imageCount to the number of progressive frames
+    // available, and avifDecoderNextImage()/avifDecoderNthImage() will allow for specific layers
+    // of a progressive image to be decoded. To distinguish between a progressive AVIF and an AVIF
+    // image sequence, inspect avifDecoder.progressiveState.
+    avifBool allowProgressive;
+
+    // Enable any of these to avoid reading and surfacing specific data to the decoded avifImage.
+    // These can be useful if your avifIO implementation heavily uses AVIF_RESULT_WAITING_ON_IO for
+    // streaming data, as some of these payloads are (unfortunately) packed at the end of the file,
+    // which will cause avifDecoderParse() to return AVIF_RESULT_WAITING_ON_IO until it finds them.
+    // If you don't actually leverage this data, it is best to ignore it here.
+    avifBool ignoreExif;
+    avifBool ignoreXMP;
+
+    // This represents the maximum size of an image (in pixel count) that libavif and the underlying
+    // AV1 decoder should attempt to decode. It defaults to AVIF_DEFAULT_IMAGE_SIZE_LIMIT, and can be
+    // set to a smaller value. The value 0 is reserved.
+    // Note: Only some underlying AV1 codecs support a configurable size limit (such as dav1d).
+    uint32_t imageSizeLimit;
+
+    // This provides an upper bound on how many images the decoder is willing to attempt to decode,
+    // to provide a bit of protection from malicious or malformed AVIFs citing millions upon
+    // millions of frames, only to be invalid later. The default is AVIF_DEFAULT_IMAGE_COUNT_LIMIT
+    // (see comment above), and setting this to 0 disables the limit.
+    uint32_t imageCountLimit;
+
+    // Strict flags. Defaults to AVIF_STRICT_ENABLED. See avifStrictFlag definitions above.
+    avifStrictFlags strictFlags;
+
+    // --------------------------------------------------------------------------------------------
+    // Outputs
+
     // All decoded image data; owned by the decoder. All information in this image is incrementally
     // added and updated as avifDecoder*() functions are called. After a successful call to
     // avifDecoderParse(), all values in decoder->image (other than the planes/rowBytes themselves)
@@ -788,44 +858,31 @@ typedef struct avifDecoder
     avifImage * image;
 
     // Counts and timing for the current image in an image sequence. Uninteresting for single image files.
-    int imageIndex;                // 0-based
-    int imageCount;                // Always 1 for non-sequences
-    avifImageTiming imageTiming;   //
-    uint64_t timescale;            // timescale of the media (Hz)
-    double duration;               // in seconds (durationInTimescales / timescale)
-    uint64_t durationInTimescales; // duration in "timescales"
+    int imageIndex;                        // 0-based
+    int imageCount;                        // Always 1 for non-progressive, non-sequence AVIFs.
+    avifProgressiveState progressiveState; // See avifProgressiveState declaration
+    avifImageTiming imageTiming;           //
+    uint64_t timescale;                    // timescale of the media (Hz)
+    double duration;                       // in seconds (durationInTimescales / timescale)
+    uint64_t durationInTimescales;         // duration in "timescales"
 
     // This is true when avifDecoderParse() detects an alpha plane. Use this to find out if alpha is
     // present after a successful call to avifDecoderParse(), but prior to any call to
     // avifDecoderNextImage() or avifDecoderNthImage(), as decoder->image->alphaPlane won't exist yet.
     avifBool alphaPresent;
 
-    // Enable any of these to avoid reading and surfacing specific data to the decoded avifImage.
-    // These can be useful if your avifIO implementation heavily uses AVIF_RESULT_WAITING_ON_IO for
-    // streaming data, as some of these payloads are (unfortunately) packed at the end of the file,
-    // which will cause avifDecoderParse() to return AVIF_RESULT_WAITING_ON_IO until it finds them.
-    // If you don't actually leverage this data, it is best to ignore it here.
-    avifBool ignoreExif;
-    avifBool ignoreXMP;
-
-    // This provides an upper bound on how many images the decoder is willing to attempt to decode,
-    // to provide a bit of protection from malicious or malformed AVIFs citing millions upon
-    // millions of frames, only to be invalid later. The default is AVIF_DEFAULT_IMAGE_COUNT_LIMIT
-    // (see comment above), and setting this to 0 disables the limit.
-    uint32_t imageCountLimit;
-
-    // Strict flags. Defaults to AVIF_STRICT_ENABLED. See avifStrictFlag definitions above.
-    avifStrictFlags strictFlags;
-
     // stats from the most recent read, possibly 0s if reading an image sequence
     avifIOStats ioStats;
 
-    // Use one of the avifDecoderSetIO*() functions to set this
-    avifIO * io;
-
     // Additional diagnostics (such as detailed error state)
     avifDiagnostics diag;
 
+    // --------------------------------------------------------------------------------------------
+    // Internals
+
+    // Use one of the avifDecoderSetIO*() functions to set this
+    avifIO * io;
+
     // Internals used by the decoder
     struct avifDecoderData * data;
 } avifDecoder;
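As a rough illustration of what imageSizeLimit guards against: the pixel-count comparison must be widened to 64 bits, or the multiplication itself can overflow. This is a hypothetical sketch, not the actual read.c validation (which is elsewhere in this commit); the default below matches the removed AVIF_MAX_IMAGE_SIZE of 16384 * 16384 later in this diff:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper mirroring the kind of check imageSizeLimit enables. */
#define DEFAULT_IMAGE_SIZE_LIMIT (16384u * 16384u)

static bool dimensionsExceedLimit(uint32_t width, uint32_t height, uint32_t limit)
{
    /* Widen before multiplying: e.g. 65536 * 65536 would overflow uint32_t. */
    return (uint64_t)width * height > (uint64_t)limit;
}
```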
diff --git a/include/avif/internal.h b/include/avif/internal.h
index 0f58486..cf3f231 100644
--- a/include/avif/internal.h
+++ b/include/avif/internal.h
@@ -155,26 +155,38 @@ avifResult avifImageYUVToRGBLibYUV(const avifImage * image, avifRGBImage * rgb);
 avifResult avifRGBImagePremultiplyAlphaLibYUV(avifRGBImage * rgb);
 avifResult avifRGBImageUnpremultiplyAlphaLibYUV(avifRGBImage * rgb);
 
+// ---------------------------------------------------------------------------
+// Scaling
+
+// This scales the YUV/A planes in-place.
+avifBool avifImageScale(avifImage * image, uint32_t dstWidth, uint32_t dstHeight, uint32_t imageSizeLimit, avifDiagnostics * diag);
+
 // ---------------------------------------------------------------------------
 // avifCodecDecodeInput
 
+// Legal spatial_id values are [0,1,2,3], so this serves as a sentinel value for "do not filter by spatial_id"
+#define AVIF_SPATIAL_ID_UNSET 0xff
+
 typedef struct avifDecodeSample
 {
     avifROData data;
     avifBool ownsData;
     avifBool partialData; // if true, data exists but doesn't have all of the sample in it
 
-    uint32_t itemID; // if non-zero, data comes from a mergedExtents buffer in an avifDecoderItem, not a file offset
-    uint64_t offset; // used only when itemID is zero, ignored and set to 0 when itemID is non-zero
-    size_t size;
-    avifBool sync; // is sync sample (keyframe)
+    uint32_t itemID;   // if non-zero, data comes from a mergedExtents buffer in an avifDecoderItem, not a file offset
+    uint64_t offset;   // additional offset into data. Can be used to offset into an itemID's payload as well.
+    size_t size;       //
+    uint8_t spatialID; // If set to a value other than AVIF_SPATIAL_ID_UNSET, output frames from this sample should be
+                       // skipped until the output frame's spatial_id matches this ID.
+    avifBool sync;     // is sync sample (keyframe)
 } avifDecodeSample;
 AVIF_ARRAY_DECLARE(avifDecodeSampleArray, avifDecodeSample, sample);
 
 typedef struct avifCodecDecodeInput
 {
     avifDecodeSampleArray samples;
-    avifBool alpha; // if true, this is decoding an alpha plane
+    avifBool allLayers; // if true, the underlying codec must decode all layers, not just the best layer
+    avifBool alpha;     // if true, this is decoding an alpha plane
 } avifCodecDecodeInput;
 
 avifCodecDecodeInput * avifCodecDecodeInputCreate(void);
@@ -244,6 +256,9 @@ typedef struct avifCodec
     struct avifCodecInternal * internal;  // up to each codec to use how it wants
                                           //
     avifDiagnostics * diag;               // Shallow copy; owned by avifEncoder or avifDecoder
+                                          //
+    uint8_t operatingPoint;               // Operating point, defaults to 0.
+    avifBool allLayers;                   // if true, the underlying codec must decode all layers, not just the best layer
 
     avifCodecGetNextImageFunc getNextImage;
     avifCodecEncodeImageFunc encodeImage;
@@ -352,10 +367,6 @@ typedef struct avifSequenceHeader
 } avifSequenceHeader;
 avifBool avifSequenceHeaderParse(avifSequenceHeader * header, const avifROData * sample);
 
-// A maximum image size to avoid out-of-memory errors or integer overflow in
-// (32-bit) int or unsigned int arithmetic operations.
-#define AVIF_MAX_IMAGE_SIZE (16384 * 16384)
-
 #ifdef __cplusplus
 } // extern "C"
 #endif
diff --git a/libavif.pc.cmake b/libavif.pc.cmake
index 4ef2c8a..f87d289 100644
--- a/libavif.pc.cmake
+++ b/libavif.pc.cmake
@@ -1,7 +1,7 @@
 prefix=@CMAKE_INSTALL_PREFIX@
-exec_prefix=${prefix}/bin
-libdir=${prefix}/@CMAKE_INSTALL_LIBDIR@
-includedir=${prefix}/include
+exec_prefix=@CMAKE_INSTALL_PREFIX@
+libdir=@CMAKE_INSTALL_FULL_LIBDIR@
+includedir=@CMAKE_INSTALL_FULL_INCLUDEDIR@
 
 Name: @PROJECT_NAME@
 Description: Library for encoding and decoding .avif files
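The motivation for this .pc change: with CMake's GNUInstallDirs, CMAKE_INSTALL_LIBDIR can be a relative path ("lib") or, on some distributions, an absolute one, so composing `${prefix}/@CMAKE_INSTALL_LIBDIR@` can render a broken path, and `exec_prefix=${prefix}/bin` was simply wrong (exec_prefix conventionally equals the install prefix, not a bin directory). The FULL_ variables are always absolute. A hypothetical rendered result for a default /usr/local install might look like:

```
prefix=/usr/local
exec_prefix=/usr/local
libdir=/usr/local/lib
includedir=/usr/local/include
```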
diff --git a/src/avif.c b/src/avif.c
index 0fa17aa..64b961f 100644
--- a/src/avif.c
+++ b/src/avif.c
@@ -101,6 +101,20 @@ const char * avifResultToString(avifResult result)
     return "Unknown Error";
 }
 
+const char * avifProgressiveStateToString(avifProgressiveState progressiveState)
+{
+    // clang-format off
+    switch (progressiveState) {
+        case AVIF_PROGRESSIVE_STATE_UNAVAILABLE: return "Unavailable";
+        case AVIF_PROGRESSIVE_STATE_AVAILABLE:   return "Available";
+        case AVIF_PROGRESSIVE_STATE_ACTIVE:      return "Active";
+        default:
+            break;
+    }
+    // clang-format on
+    return "Unknown";
+}
+
 // This function assumes nothing in this struct needs to be freed; use avifImageClear() externally
 static void avifImageSetDefaults(avifImage * image)
 {
diff --git a/src/codec_aom.c b/src/codec_aom.c
index 872ac01..e68f404 100644
--- a/src/codec_aom.c
+++ b/src/codec_aom.c
@@ -41,6 +41,13 @@
 #if AOM_CODEC_ABI_VERSION >= (6 + AOM_IMAGE_ABI_VERSION)
 #define HAVE_AOM_CODEC_SET_OPTION 1
 #endif
+
+// Speeds 7-9 were added to all intra mode in https://aomedia-review.googlesource.com/c/aom/+/140624.
+#if defined(AOM_EXT_PART_ABI_VERSION)
+#if AOM_ENCODER_ABI_VERSION >= (10 + AOM_CODEC_ABI_VERSION + AOM_EXT_PART_ABI_VERSION)
+#define ALL_INTRA_HAS_SPEEDS_7_TO_9 1
+#endif
+#endif
 #endif
 
 struct avifCodecInternal
@@ -104,10 +111,10 @@ static avifBool aomCodecGetNextImage(struct avifCodec * codec,
         }
         codec->internal->decoderInitialized = AVIF_TRUE;
 
-        // Ensure that we only get the "highest spatial layer" as a single frame
-        // for each input sample, instead of getting each spatial layer as its own
-        // frame one at a time ("all layers").
-        if (aom_codec_control(&codec->internal->decoder, AV1D_SET_OUTPUT_ALL_LAYERS, 0)) {
+        if (aom_codec_control(&codec->internal->decoder, AV1D_SET_OUTPUT_ALL_LAYERS, codec->allLayers)) {
+            return AVIF_FALSE;
+        }
+        if (aom_codec_control(&codec->internal->decoder, AV1D_SET_OPERATING_POINT, codec->operatingPoint)) {
             return AVIF_FALSE;
         }
 
@@ -115,16 +122,27 @@ static avifBool aomCodecGetNextImage(struct avifCodec * codec,
     }
 
     aom_image_t * nextFrame = NULL;
+    uint8_t spatialID = AVIF_SPATIAL_ID_UNSET;
     for (;;) {
         nextFrame = aom_codec_get_frame(&codec->internal->decoder, &codec->internal->iter);
         if (nextFrame) {
-            // Got an image!
-            break;
+            if (spatialID != AVIF_SPATIAL_ID_UNSET) {
+                // This requires libaom v3.1.2 or later, which has the fix for
+                // https://crbug.com/aomedia/2993.
+                if (spatialID == nextFrame->spatial_id) {
+                    // Found the correct spatial_id.
+                    break;
+                }
+            } else {
+                // Got an image!
+                break;
+            }
         } else if (sample) {
             codec->internal->iter = NULL;
             if (aom_codec_decode(&codec->internal->decoder, sample->data.data, sample->data.size, NULL)) {
                 return AVIF_FALSE;
             }
+            spatialID = sample->spatialID;
             sample = NULL;
         } else {
             break;
@@ -348,6 +366,7 @@ static avifBool avifProcessAOMOptionsPreInit(avifCodec * codec, avifBool alpha,
         int val;
         if (avifKeyEqualsName(entry->key, "end-usage", alpha)) { // Rate control mode
             if (!aomOptionParseEnum(entry->value, endUsageEnum, &val)) {
+                avifDiagnosticsPrintf(codec->diag, "Invalid value for end-usage: %s", entry->value);
                 return AVIF_FALSE;
             }
             cfg->rc_end_usage = val;
@@ -433,6 +452,12 @@ static avifBool avifProcessAOMOptionsPostInit(avifCodec * codec, avifBool alpha)
             key += shortPrefixLen;
         }
         if (aom_codec_set_option(&codec->internal->encoder, key, entry->value) != AOM_CODEC_OK) {
+            avifDiagnosticsPrintf(codec->diag,
+                                  "aom_codec_set_option(\"%s\", \"%s\") failed: %s: %s",
+                                  key,
+                                  entry->value,
+                                  aom_codec_error(&codec->internal->encoder),
+                                  aom_codec_error_detail(&codec->internal->encoder));
             return AVIF_FALSE;
         }
 #else  // !defined(HAVE_AOM_CODEC_SET_OPTION)
@@ -498,10 +523,10 @@ static avifResult aomCodecEncodeImage(avifCodec * codec,
         // Speed  4: GoodQuality CpuUsed 4
         // Speed  5: GoodQuality CpuUsed 5
         // Speed  6: GoodQuality CpuUsed 6
-        // Speed  7: GoodQuality CpuUsed 6
-        // Speed  8: RealTime    CpuUsed 6
-        // Speed  9: RealTime    CpuUsed 7
-        // Speed 10: RealTime    CpuUsed 8
+        // Speed  7: RealTime    CpuUsed 7
+        // Speed  8: RealTime    CpuUsed 8
+        // Speed  9: RealTime    CpuUsed 9
+        // Speed 10: RealTime    CpuUsed 9
         unsigned int aomUsage = AOM_USAGE_GOOD_QUALITY;
         // Use the new AOM_USAGE_ALL_INTRA (added in https://crbug.com/aomedia/2959) for still
         // image encoding if it is available.
@@ -512,11 +537,15 @@ static avifResult aomCodecEncodeImage(avifCodec * codec,
 #endif
         int aomCpuUsed = -1;
         if (encoder->speed != AVIF_SPEED_DEFAULT) {
-            if (encoder->speed < 8) {
-                aomCpuUsed = AVIF_CLAMP(encoder->speed, 0, 6);
-            } else {
+            aomCpuUsed = AVIF_CLAMP(encoder->speed, 0, 9);
+            if (aomCpuUsed >= 7) {
+#if defined(AOM_USAGE_ALL_INTRA) && defined(ALL_INTRA_HAS_SPEEDS_7_TO_9)
+                if (!(addImageFlags & AVIF_ADD_IMAGE_FLAG_SINGLE)) {
+                    aomUsage = AOM_USAGE_REALTIME;
+                }
+#else
                 aomUsage = AOM_USAGE_REALTIME;
-                aomCpuUsed = AVIF_CLAMP(encoder->speed - 2, 6, 8);
+#endif
             }
         }
 
@@ -524,7 +553,7 @@ static avifResult aomCodecEncodeImage(avifCodec * codec,
         static const int aomVersion_2_0_0 = (2 << 16);
         const int aomVersion = aom_codec_version();
         if ((aomVersion < aomVersion_2_0_0) && (image->depth > 8)) {
-            // Due to a known issue with libavif v1.0.0-errata1-avif, 10bpc and
+            // Due to a known issue with libaom v1.0.0-errata1-avif, 10bpc and
             // 12bpc image encodes will call the wrong variant of
             // aom_subtract_block when cpu-used is 7 or 8, and crash. Until we get
             // a new tagged release from libaom with the fix and can verify we're
@@ -700,55 +729,114 @@ static avifResult aomCodecEncodeImage(avifCodec * codec,
 #endif
     }
 
-    int yShift = codec->internal->formatInfo.chromaShiftY;
-    uint32_t uvHeight = (image->height + yShift) >> yShift;
-    aom_image_t * aomImage = aom_img_alloc(NULL, codec->internal->aomFormat, image->width, image->height, 16);
+    aom_image_t aomImage;
+    // We prefer to simply set the aomImage.planes[] pointers to the plane buffers in 'image'. When
+    // doing this, we set aomImage.w equal to aomImage.d_w and aomImage.h equal to aomImage.d_h and
+    // do not "align" aomImage.w and aomImage.h. Unfortunately this exposes a bug in libaom
+    // (https://crbug.com/aomedia/3113) if chroma is subsampled and image->width or image->height is
+    // equal to 1. To work around this libaom bug, we allocate the aomImage.planes[] buffers and
+    // copy the image YUV data if image->width or image->height is equal to 1. This bug has been
+    // fixed in libaom v3.1.3.
+    //
+    // Note: The exact condition for the bug is
+    //   ((image->width == 1) && (chroma is subsampled horizontally)) ||
+    //   ((image->height == 1) && (chroma is subsampled vertically))
+    // Since an image width or height of 1 is uncommon in practice, we test an inexact but simpler
+    // condition.
+    avifBool aomImageAllocated = (image->width == 1) || (image->height == 1);
+    if (aomImageAllocated) {
+        aom_img_alloc(&aomImage, codec->internal->aomFormat, image->width, image->height, 16);
+    } else {
+        memset(&aomImage, 0, sizeof(aomImage));
+        aomImage.fmt = codec->internal->aomFormat;
+        aomImage.bit_depth = (image->depth > 8) ? 16 : 8;
+        aomImage.w = image->width;
+        aomImage.h = image->height;
+        aomImage.d_w = image->width;
+        aomImage.d_h = image->height;
+        // Get sample size for this format.
+        unsigned int bps;
+        if (codec->internal->aomFormat == AOM_IMG_FMT_I420) {
+            bps = 12;
+        } else if (codec->internal->aomFormat == AOM_IMG_FMT_I422) {
+            bps = 16;
+        } else if (codec->internal->aomFormat == AOM_IMG_FMT_I444) {
+            bps = 24;
+        } else if (codec->internal->aomFormat == AOM_IMG_FMT_I42016) {
+            bps = 24;
+        } else if (codec->internal->aomFormat == AOM_IMG_FMT_I42216) {
+            bps = 32;
+        } else if (codec->internal->aomFormat == AOM_IMG_FMT_I44416) {
+            bps = 48;
+        } else {
+            bps = 16;
+        }
+        aomImage.bps = bps;
+        aomImage.x_chroma_shift = alpha ? 1 : codec->internal->formatInfo.chromaShiftX;
+        aomImage.y_chroma_shift = alpha ? 1 : codec->internal->formatInfo.chromaShiftY;
+    }
+
     avifBool monochromeRequested = AVIF_FALSE;
 
     if (alpha) {
-        aomImage->range = (image->alphaRange == AVIF_RANGE_FULL) ? AOM_CR_FULL_RANGE : AOM_CR_STUDIO_RANGE;
-        aom_codec_control(&codec->internal->encoder, AV1E_SET_COLOR_RANGE, aomImage->range);
+        aomImage.range = (image->alphaRange == AVIF_RANGE_FULL) ? AOM_CR_FULL_RANGE : AOM_CR_STUDIO_RANGE;
+        aom_codec_control(&codec->internal->encoder, AV1E_SET_COLOR_RANGE, aomImage.range);
         monochromeRequested = AVIF_TRUE;
-        for (uint32_t j = 0; j < image->height; ++j) {
-            uint8_t * srcAlphaRow = &image->alphaPlane[j * image->alphaRowBytes];
-            uint8_t * dstAlphaRow = &aomImage->planes[0][j * aomImage->stride[0]];
-            memcpy(dstAlphaRow, srcAlphaRow, image->alphaRowBytes);
+        if (aomImageAllocated) {
+            for (uint32_t j = 0; j < image->height; ++j) {
+                uint8_t * srcAlphaRow = &image->alphaPlane[j * image->alphaRowBytes];
+                uint8_t * dstAlphaRow = &aomImage.planes[0][j * aomImage.stride[0]];
+                memcpy(dstAlphaRow, srcAlphaRow, image->alphaRowBytes);
+            }
+        } else {
+            aomImage.planes[0] = image->alphaPlane;
+            aomImage.stride[0] = image->alphaRowBytes;
         }
 
         // Ignore UV planes when monochrome
     } else {
-        aomImage->range = (image->yuvRange == AVIF_RANGE_FULL) ? AOM_CR_FULL_RANGE : AOM_CR_STUDIO_RANGE;
-        aom_codec_control(&codec->internal->encoder, AV1E_SET_COLOR_RANGE, aomImage->range);
+        aomImage.range = (image->yuvRange == AVIF_RANGE_FULL) ? AOM_CR_FULL_RANGE : AOM_CR_STUDIO_RANGE;
+        aom_codec_control(&codec->internal->encoder, AV1E_SET_COLOR_RANGE, aomImage.range);
         int yuvPlaneCount = 3;
         if (image->yuvFormat == AVIF_PIXEL_FORMAT_YUV400) {
             yuvPlaneCount = 1; // Ignore UV planes when monochrome
             monochromeRequested = AVIF_TRUE;
         }
-        int xShift = codec->internal->formatInfo.chromaShiftX;
-        uint32_t uvWidth = (image->width + xShift) >> xShift;
-        uint32_t bytesPerPixel = (image->depth > 8) ? 2 : 1;
-        for (int yuvPlane = 0; yuvPlane < yuvPlaneCount; ++yuvPlane) {
-            uint32_t planeWidth = (yuvPlane == AVIF_CHAN_Y) ? image->width : uvWidth;
-            uint32_t planeHeight = (yuvPlane == AVIF_CHAN_Y) ? image->height : uvHeight;
-            uint32_t bytesPerRow = bytesPerPixel * planeWidth;
-
-            for (uint32_t j = 0; j < planeHeight; ++j) {
-                uint8_t * srcRow = &image->yuvPlanes[yuvPlane][j * image->yuvRowBytes[yuvPlane]];
-                uint8_t * dstRow = &aomImage->planes[yuvPlane][j * aomImage->stride[yuvPlane]];
-                memcpy(dstRow, srcRow, bytesPerRow);
+        if (aomImageAllocated) {
+            int xShift = codec->internal->formatInfo.chromaShiftX;
+            uint32_t uvWidth = (image->width + xShift) >> xShift;
+            int yShift = codec->internal->formatInfo.chromaShiftY;
+            uint32_t uvHeight = (image->height + yShift) >> yShift;
+            uint32_t bytesPerPixel = (image->depth > 8) ? 2 : 1;
+            for (int yuvPlane = 0; yuvPlane < yuvPlaneCount; ++yuvPlane) {
+                uint32_t planeWidth = (yuvPlane == AVIF_CHAN_Y) ? image->width : uvWidth;
+                uint32_t planeHeight = (yuvPlane == AVIF_CHAN_Y) ? image->height : uvHeight;
+                uint32_t bytesPerRow = bytesPerPixel * planeWidth;
+
+                for (uint32_t j = 0; j < planeHeight; ++j) {
+                    uint8_t * srcRow = &image->yuvPlanes[yuvPlane][j * image->yuvRowBytes[yuvPlane]];
+                    uint8_t * dstRow = &aomImage.planes[yuvPlane][j * aomImage.stride[yuvPlane]];
+                    memcpy(dstRow, srcRow, bytesPerRow);
+                }
+            }
+        } else {
+            for (int yuvPlane = 0; yuvPlane < yuvPlaneCount; ++yuvPlane) {
+                aomImage.planes[yuvPlane] = image->yuvPlanes[yuvPlane];
+                aomImage.stride[yuvPlane] = image->yuvRowBytes[yuvPlane];
             }
         }
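The rounded-up chroma plane dimensions above rely on the shift being 0 or 1: `(dim + shift) >> shift` equals `ceil(dim / 2)` on a subsampled axis and `dim` otherwise. A standalone sketch of that arithmetic:

```c
#include <stdint.h>

/* Mirrors the uvWidth/uvHeight math above; only valid for shift values 0
 * and 1, which covers every AV1 chroma subsampling mode. */
static uint32_t chromaDimension(uint32_t lumaDimension, int chromaShift)
{
    return (lumaDimension + (uint32_t)chromaShift) >> chromaShift;
}
```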
 
-        aomImage->cp = (aom_color_primaries_t)image->colorPrimaries;
-        aomImage->tc = (aom_transfer_characteristics_t)image->transferCharacteristics;
-        aomImage->mc = (aom_matrix_coefficients_t)image->matrixCoefficients;
-        aomImage->csp = (aom_chroma_sample_position_t)image->yuvChromaSamplePosition;
-        aom_codec_control(&codec->internal->encoder, AV1E_SET_COLOR_PRIMARIES, aomImage->cp);
-        aom_codec_control(&codec->internal->encoder, AV1E_SET_TRANSFER_CHARACTERISTICS, aomImage->tc);
-        aom_codec_control(&codec->internal->encoder, AV1E_SET_MATRIX_COEFFICIENTS, aomImage->mc);
-        aom_codec_control(&codec->internal->encoder, AV1E_SET_CHROMA_SAMPLE_POSITION, aomImage->csp);
+        aomImage.cp = (aom_color_primaries_t)image->colorPrimaries;
+        aomImage.tc = (aom_transfer_characteristics_t)image->transferCharacteristics;
+        aomImage.mc = (aom_matrix_coefficients_t)image->matrixCoefficients;
+        aomImage.csp = (aom_chroma_sample_position_t)image->yuvChromaSamplePosition;
+        aom_codec_control(&codec->internal->encoder, AV1E_SET_COLOR_PRIMARIES, aomImage.cp);
+        aom_codec_control(&codec->internal->encoder, AV1E_SET_TRANSFER_CHARACTERISTICS, aomImage.tc);
+        aom_codec_control(&codec->internal->encoder, AV1E_SET_MATRIX_COEFFICIENTS, aomImage.mc);
+        aom_codec_control(&codec->internal->encoder, AV1E_SET_CHROMA_SAMPLE_POSITION, aomImage.csp);
     }
 
+    unsigned char * monoUVPlane = NULL;
     if (monochromeRequested && !codec->internal->monochromeEnabled) {
         // The user requested monochrome (via alpha or YUV400) but libaom cannot currently support
         // monochrome (see chroma_check comment above). Manually set UV planes to 0.5.
@@ -757,28 +845,45 @@ static avifResult aomCodecEncodeImage(avifCodec * codec,
         uint32_t monoUVWidth = (image->width + 1) >> 1;
         uint32_t monoUVHeight = (image->height + 1) >> 1;
 
-        for (int yuvPlane = 1; yuvPlane < 3; ++yuvPlane) {
-            if (image->depth > 8) {
-                const uint16_t half = 1 << (image->depth - 1);
-                for (uint32_t j = 0; j < monoUVHeight; ++j) {
-                    uint16_t * dstRow = (uint16_t *)&aomImage->planes[yuvPlane][j * aomImage->stride[yuvPlane]];
-                    for (uint32_t i = 0; i < monoUVWidth; ++i) {
-                        dstRow[i] = half;
-                    }
+        // Allocate the U plane if necessary.
+        if (!aomImageAllocated) {
+            uint32_t channelSize = avifImageUsesU16(image) ? 2 : 1;
+            uint32_t monoUVRowBytes = channelSize * monoUVWidth;
+            size_t monoUVSize = (size_t)monoUVHeight * monoUVRowBytes;
+
+            monoUVPlane = avifAlloc(monoUVSize);
+            aomImage.planes[1] = monoUVPlane;
+            aomImage.stride[1] = monoUVRowBytes;
+        }
+        // Set the U plane to 0.5.
+        if (image->depth > 8) {
+            const uint16_t half = 1 << (image->depth - 1);
+            for (uint32_t j = 0; j < monoUVHeight; ++j) {
+                uint16_t * dstRow = (uint16_t *)&aomImage.planes[1][(size_t)j * aomImage.stride[1]];
+                for (uint32_t i = 0; i < monoUVWidth; ++i) {
+                    dstRow[i] = half;
                 }
-            } else {
-                const uint8_t half = 128;
-                size_t planeSize = (size_t)monoUVHeight * aomImage->stride[yuvPlane];
-                memset(aomImage->planes[yuvPlane], half, planeSize);
             }
+        } else {
+            const uint8_t half = 128;
+            size_t planeSize = (size_t)monoUVHeight * aomImage.stride[1];
+            memset(aomImage.planes[1], half, planeSize);
         }
+        // Make the V plane the same as the U plane.
+        aomImage.planes[2] = aomImage.planes[1];
+        aomImage.stride[2] = aomImage.stride[1];
     }
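The neutral chroma value written above is mid-range for the bit depth: 128 for 8-bit (the memset path), and `1 << (depth - 1)` otherwise, i.e. 512 at 10-bit and 2048 at 12-bit. As a standalone sketch of that calculation:

```c
#include <stdint.h>

/* The "0.5" chroma sample value the monochrome workaround above writes
 * into the U plane, for a given bit depth. */
static uint16_t halfChromaValue(uint32_t depth)
{
    return (depth > 8) ? (uint16_t)(1u << (depth - 1)) : (uint16_t)128;
}
```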
 
     aom_enc_frame_flags_t encodeFlags = 0;
     if (addImageFlags & AVIF_ADD_IMAGE_FLAG_FORCE_KEYFRAME) {
         encodeFlags |= AOM_EFLAG_FORCE_KF;
     }
-    if (aom_codec_encode(&codec->internal->encoder, aomImage, 0, 1, encodeFlags) != AOM_CODEC_OK) {
+    aom_codec_err_t encodeErr = aom_codec_encode(&codec->internal->encoder, &aomImage, 0, 1, encodeFlags);
+    avifFree(monoUVPlane);
+    if (aomImageAllocated) {
+        aom_img_free(&aomImage);
+    }
+    if (encodeErr != AOM_CODEC_OK) {
         avifDiagnosticsPrintf(codec->diag,
                               "aom_codec_encode() failed: %s: %s",
                               aom_codec_error(&codec->internal->encoder),
@@ -797,8 +902,6 @@ static avifResult aomCodecEncodeImage(avifCodec * codec,
         }
     }
 
-    aom_img_free(aomImage);
-
     if (addImageFlags & AVIF_ADD_IMAGE_FLAG_SINGLE) {
         // Flush and clean up encoder resources early to save on overhead when encoding alpha or grid images
 
diff --git a/src/codec_dav1d.c b/src/codec_dav1d.c
index 91f1f4d..094bee3 100644
--- a/src/codec_dav1d.c
+++ b/src/codec_dav1d.c
@@ -57,8 +57,21 @@ static avifBool dav1dCodecGetNextImage(struct avifCodec * codec,
 {
     if (codec->internal->dav1dContext == NULL) {
         // Give all available threads to decode a single frame as fast as possible
+#if DAV1D_API_VERSION_MAJOR >= 6
+        codec->internal->dav1dSettings.max_frame_delay = 1;
+        codec->internal->dav1dSettings.n_threads = AVIF_CLAMP(decoder->maxThreads, 1, DAV1D_MAX_THREADS);
+#else
         codec->internal->dav1dSettings.n_frame_threads = 1;
         codec->internal->dav1dSettings.n_tile_threads = AVIF_CLAMP(decoder->maxThreads, 1, DAV1D_MAX_TILE_THREADS);
+#endif  // DAV1D_API_VERSION_MAJOR >= 6
+        // Set a maximum frame size limit to avoid OOM'ing fuzzers. In 32-bit builds, if
+        // frame_size_limit > 8192 * 8192, dav1d reduces frame_size_limit to 8192 * 8192 and logs
+        // a message, so we set frame_size_limit to at most 8192 * 8192 to avoid the dav1d_log
+        // message.
+        codec->internal->dav1dSettings.frame_size_limit = (sizeof(size_t) < 8) ? AVIF_MIN(decoder->imageSizeLimit, 8192 * 8192)
+                                                                               : decoder->imageSizeLimit;
+        codec->internal->dav1dSettings.operating_point = codec->operatingPoint;
+        codec->internal->dav1dSettings.all_layers = codec->allLayers;
 
         if (dav1d_open(&codec->internal->dav1dContext, &codec->internal->dav1dSettings) != 0) {
             return AVIF_FALSE;
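The 32-bit clamp in the hunk above can be expressed as a pure function. This is a sketch of the same logic; 8192 * 8192 is dav1d's own internal cap on 32-bit targets, as the comment notes, so clamping first merely silences the dav1d_log message:

```c
#include <stddef.h>
#include <stdint.h>

/* Effective dav1d frame_size_limit: on 32-bit builds (sizeof(size_t) < 8),
 * cap at 8192 * 8192 so dav1d does not reduce the limit itself and log. */
static uint32_t effectiveFrameSizeLimit(uint32_t imageSizeLimit, size_t sizeofSizeT)
{
    const uint32_t cap = 8192u * 8192u;
    if ((sizeofSizeT < 8) && (imageSizeLimit > cap)) {
        return cap;
    }
    return imageSizeLimit;
}
```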
@@ -98,8 +111,13 @@ static avifBool dav1dCodecGetNextImage(struct avifCodec * codec,
             return AVIF_FALSE;
         } else {
             // Got a picture!
-            gotPicture = AVIF_TRUE;
-            break;
+            if ((sample->spatialID != AVIF_SPATIAL_ID_UNSET) && (sample->spatialID != nextFrame.frame_hdr->spatial_id)) {
+                // Layer selection: skip this unwanted layer
+                dav1d_picture_unref(&nextFrame);
+            } else {
+                gotPicture = AVIF_TRUE;
+                break;
+            }
         }
     }
     if (dav1dData.data) {
@@ -208,9 +226,6 @@ avifCodec * avifCodecCreateDav1d(void)
     memset(codec->internal, 0, sizeof(struct avifCodecInternal));
     dav1d_default_settings(&codec->internal->dav1dSettings);
 
-    // Set a maximum frame size limit to avoid OOM'ing fuzzers.
-    codec->internal->dav1dSettings.frame_size_limit = AVIF_MAX_IMAGE_SIZE;
-
     // Ensure that we only get the "highest spatial layer" as a single frame
     // for each input sample, instead of getting each spatial layer as its own
     // frame one at a time ("all layers").
diff --git a/src/codec_libgav1.c b/src/codec_libgav1.c
index 91f1dc5..de5fa15 100644
--- a/src/codec_libgav1.c
+++ b/src/codec_libgav1.c
@@ -31,6 +31,8 @@ static avifBool gav1CodecGetNextImage(struct avifCodec * codec,
 {
     if (codec->internal->gav1Decoder == NULL) {
         codec->internal->gav1Settings.threads = decoder->maxThreads;
+        codec->internal->gav1Settings.operating_point = codec->operatingPoint;
+        codec->internal->gav1Settings.output_all_layers = codec->allLayers;
 
         if (Libgav1DecoderCreate(&codec->internal->gav1Settings, &codec->internal->gav1Decoder) != kLibgav1StatusOk) {
             return AVIF_FALSE;
@@ -48,9 +50,17 @@ static avifBool gav1CodecGetNextImage(struct avifCodec * codec,
     // returned by the previous Libgav1DecoderDequeueFrame() call. Clear
     // our pointer to the previous output frame.
     codec->internal->gav1Image = NULL;
+
     const Libgav1DecoderBuffer * nextFrame = NULL;
-    if (Libgav1DecoderDequeueFrame(codec->internal->gav1Decoder, &nextFrame) != kLibgav1StatusOk) {
-        return AVIF_FALSE;
+    for (;;) {
+        if (Libgav1DecoderDequeueFrame(codec->internal->gav1Decoder, &nextFrame) != kLibgav1StatusOk) {
+            return AVIF_FALSE;
+        }
+        if (nextFrame && (sample->spatialID != AVIF_SPATIAL_ID_UNSET) && (nextFrame->spatial_id != sample->spatialID)) {
+            nextFrame = NULL;
+        } else {
+            break;
+        }
     }
     // Got an image!
 
diff --git a/src/read.c b/src/read.c
index bd4c3b3..68784fe 100644
--- a/src/read.c
+++ b/src/read.c
@@ -37,6 +37,8 @@ static const size_t xmpContentTypeSize = sizeof(xmpContentType);
 // can't be more than 4 unique tuples right now.
 #define MAX_IPMA_VERSION_AND_FLAGS_SEEN 4
 
+#define MAX_AV1_LAYER_COUNT 4
+
 // ---------------------------------------------------------------------------
 // Box data structures
 
@@ -90,6 +92,21 @@ typedef struct avifPixelInformationProperty
     uint8_t planeCount;
 } avifPixelInformationProperty;
 
+typedef struct avifOperatingPointSelectorProperty
+{
+    uint8_t opIndex;
+} avifOperatingPointSelectorProperty;
+
+typedef struct avifLayerSelectorProperty
+{
+    uint16_t layerID;
+} avifLayerSelectorProperty;
+
+typedef struct avifAV1LayeredImageIndexingProperty
+{
+    uint32_t layerSize[3];
+} avifAV1LayeredImageIndexingProperty;
+
 // ---------------------------------------------------------------------------
 // Top-level structures
 
@@ -110,6 +127,9 @@ typedef struct avifProperty
         avifImageRotation irot;
         avifImageMirror imir;
         avifPixelInformationProperty pixi;
+        avifOperatingPointSelectorProperty a1op;
+        avifLayerSelectorProperty lsel;
+        avifAV1LayeredImageIndexingProperty a1lx;
     } u;
 } avifProperty;
 AVIF_ARRAY_DECLARE(avifPropertyArray, avifProperty, prop);
@@ -134,7 +154,9 @@ typedef struct avifDecoderItem
     struct avifMeta * meta; // Unowned; A back-pointer for convenience
     uint8_t type[4];
     size_t size;
-    uint32_t idatID; // If non-zero, offset is relative to this idat box (iloc construction_method==1)
+    avifBool idatStored; // If true, offset is relative to the associated meta box's idat box (iloc construction_method==1)
+    uint32_t width;      // Set from this item's ispe property, if present
+    uint32_t height;     // Set from this item's ispe property, if present
     avifContentType contentType;
     avifPropertyArray properties;
     avifExtentArray extents;       // All extent offsets/sizes
@@ -147,18 +169,11 @@ typedef struct avifDecoderItem
     uint32_t dimgForID;            // if non-zero, this item is a derived image for Item #{dimgForID}
     uint32_t premByID;             // if non-zero, this item is premultiplied by Item #{premByID}
     avifBool hasUnsupportedEssentialProperty; // If true, this item cites a property flagged as 'essential' that libavif doesn't support (yet). Ignore the item, if so.
-    avifBool ipmaSeen; // if true, this item already received a property association
+    avifBool ipmaSeen;    // if true, this item already received a property association
+    avifBool progressive; // if true, this item has progressive layers (a1lx), but does not select a specific layer (lsel)
 } avifDecoderItem;
 AVIF_ARRAY_DECLARE(avifDecoderItemArray, avifDecoderItem, item);
 
-// idat storage
-typedef struct avifDecoderItemData
-{
-    uint32_t id;
-    avifRWData data;
-} avifDecoderItemData;
-AVIF_ARRAY_DECLARE(avifDecoderItemDataArray, avifDecoderItemData, idat);
-
 // grid storage
 typedef struct avifImageGrid
 {
@@ -361,11 +376,11 @@ static uint32_t avifGetSampleCountOfChunk(const avifSampleTableSampleToChunkArra
     return sampleCount;
 }
 
-static avifBool avifCodecDecodeInputGetSamples(avifCodecDecodeInput * decodeInput,
-                                               avifSampleTable * sampleTable,
-                                               const uint32_t imageCountLimit,
-                                               const uint64_t sizeHint,
-                                               avifDiagnostics * diag)
+static avifBool avifCodecDecodeInputFillFromSampleTable(avifCodecDecodeInput * decodeInput,
+                                                        avifSampleTable * sampleTable,
+                                                        const uint32_t imageCountLimit,
+                                                        const uint64_t sizeHint,
+                                                        avifDiagnostics * diag)
 {
     if (imageCountLimit) {
        // Verify that we're not about to exceed the frame count limit.
@@ -417,7 +432,8 @@ static avifBool avifCodecDecodeInputGetSamples(avifCodecDecodeInput * decodeInpu
             avifDecodeSample * sample = (avifDecodeSample *)avifArrayPushPtr(&decodeInput->samples);
             sample->offset = sampleOffset;
             sample->size = sampleSize;
-            sample->sync = AVIF_FALSE; // to potentially be set to true following the outer loop
+            sample->spatialID = AVIF_SPATIAL_ID_UNSET; // Not filtering by spatial_id
+            sample->sync = AVIF_FALSE;                 // to potentially be set to true following the outer loop
 
             if (sampleSize > UINT64_MAX - sampleOffset) {
                 avifDiagnosticsPrintf(diag,
@@ -451,6 +467,119 @@ static avifBool avifCodecDecodeInputGetSamples(avifCodecDecodeInput * decodeInpu
     return AVIF_TRUE;
 }
 
+static avifBool avifCodecDecodeInputFillFromDecoderItem(avifCodecDecodeInput * decodeInput,
+                                                        avifDecoderItem * item,
+                                                        avifBool allowProgressive,
+                                                        const uint32_t imageCountLimit,
+                                                        const uint64_t sizeHint,
+                                                        avifDiagnostics * diag)
+{
+    if (sizeHint && (item->size > sizeHint)) {
+        avifDiagnosticsPrintf(diag, "Exceeded avifIO's sizeHint, possibly truncated data");
+        return AVIF_FALSE;
+    }
+
+    uint8_t layerCount = 0;
+    size_t layerSizes[4] = { 0 };
+    const avifProperty * a1lxProp = avifPropertyArrayFind(&item->properties, "a1lx");
+    if (a1lxProp) {
+        // Calculate layer count and all layer sizes from the a1lx box, and then validate
+
+        size_t remainingSize = item->size;
+        for (int i = 0; i < 3; ++i) {
+            ++layerCount;
+
+            const size_t layerSize = (size_t)a1lxProp->u.a1lx.layerSize[i];
+            if (layerSize) {
+                if (layerSize >= remainingSize) { // >= instead of > because there must be room for the last layer
+                    avifDiagnosticsPrintf(diag, "a1lx layer index [%d] does not fit in item size", i);
+                    return AVIF_FALSE;
+                }
+                layerSizes[i] = layerSize;
+                remainingSize -= layerSize;
+            } else {
+                layerSizes[i] = remainingSize;
+                remainingSize = 0;
+                break;
+            }
+        }
+        if (remainingSize > 0) {
+            assert(layerCount == 3);
+            ++layerCount;
+            layerSizes[3] = remainingSize;
+        }
+    }
+
+    const avifProperty * lselProp = avifPropertyArrayFind(&item->properties, "lsel");
+    item->progressive = (a1lxProp && !lselProp); // Progressive images offer layers via the a1lxProp, but don't specify a layer selection with lsel.
+    if (lselProp) {
+        // Layer selection. This requires that the underlying AV1 codec decodes all layers,
+        // and then only returns the requested layer as a single frame. To the user of libavif,
+        // this appears to be a single frame.
+
+        decodeInput->allLayers = AVIF_TRUE;
+
+        size_t sampleSize = 0;
+        if (layerCount > 0) {
+            // Optimization: If we're selecting a layer that doesn't require the entire image's payload (hinted via the a1lx box)
+
+            if (lselProp->u.lsel.layerID >= layerCount) {
+                avifDiagnosticsPrintf(diag,
+                                      "lsel property requests layer index [%u] which isn't present in a1lx property ([%u] layers)",
+                                      lselProp->u.lsel.layerID,
+                                      layerCount);
+                return AVIF_FALSE;
+            }
+
+            for (uint8_t i = 0; i <= lselProp->u.lsel.layerID; ++i) {
+                sampleSize += layerSizes[i];
+            }
+        } else {
+            // This layer's payload subsection is unknown, just use the whole payload
+            sampleSize = item->size;
+        }
+
+        avifDecodeSample * sample = (avifDecodeSample *)avifArrayPushPtr(&decodeInput->samples);
+        sample->itemID = item->id;
+        sample->offset = 0;
+        sample->size = sampleSize;
+        assert(lselProp->u.lsel.layerID < MAX_AV1_LAYER_COUNT);
+        sample->spatialID = (uint8_t)lselProp->u.lsel.layerID;
+        sample->sync = AVIF_TRUE;
+    } else if (allowProgressive && item->progressive) {
+        // Progressive image. Decode all layers and expose them all to the user.
+
+        if (imageCountLimit && (layerCount > imageCountLimit)) {
+            avifDiagnosticsPrintf(diag, "Exceeded avifDecoder's imageCountLimit (progressive)");
+            return AVIF_FALSE;
+        }
+
+        decodeInput->allLayers = AVIF_TRUE;
+
+        size_t offset = 0;
+        for (int i = 0; i < layerCount; ++i) {
+            avifDecodeSample * sample = (avifDecodeSample *)avifArrayPushPtr(&decodeInput->samples);
+            sample->itemID = item->id;
+            sample->offset = offset;
+            sample->size = layerSizes[i];
+            sample->spatialID = AVIF_SPATIAL_ID_UNSET;
+            sample->sync = (i == 0); // Assume all layers depend on the first layer
+
+            offset += layerSizes[i];
+        }
+    } else {
+        // Typical case: Use the entire item's payload for a single frame output
+
+        avifDecodeSample * sample = (avifDecodeSample *)avifArrayPushPtr(&decodeInput->samples);
+        sample->itemID = item->id;
+        sample->offset = 0;
+        sample->size = item->size;
+        sample->spatialID = AVIF_SPATIAL_ID_UNSET;
+        sample->sync = AVIF_TRUE;
+    }
+    return AVIF_TRUE;
+}
+
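The a1lx size bookkeeping in avifCodecDecodeInputFillFromDecoderItem above can be exercised in isolation. This sketch (with a hypothetical `computeLayerSizes` helper) reproduces the two rules from the diff: a zero entry means "all remaining bytes", and leftover bytes after three explicit sizes form an implicit fourth layer:

```c
#include <stddef.h>
#include <stdint.h>

// Hypothetical standalone version of the a1lx size walk above.
// Returns the layer count (1..4) on success, 0 if a layer doesn't fit.
static int computeLayerSizes(size_t itemSize, const uint32_t a1lxSize[3], size_t layerSizes[4])
{
    int layerCount = 0;
    size_t remainingSize = itemSize;
    for (int i = 0; i < 3; ++i) {
        ++layerCount;
        const size_t layerSize = (size_t)a1lxSize[i];
        if (layerSize) {
            if (layerSize >= remainingSize) { // must leave room for the last layer
                return 0;
            }
            layerSizes[i] = layerSize;
            remainingSize -= layerSize;
        } else {
            layerSizes[i] = remainingSize; // zero means "everything that remains"
            remainingSize = 0;
            break;
        }
    }
    if (remainingSize > 0) { // three explicit sizes; the remainder is layer 4
        ++layerCount;
        layerSizes[3] = remainingSize;
    }
    return layerCount;
}
```

For an lsel-selected layer N, the sample size is then the sum of layerSizes[0..N]; a progressive decode instead emits one sample per layer at successive offsets.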
 // ---------------------------------------------------------------------------
 // Helper macros / functions
 
@@ -487,6 +616,9 @@ typedef struct avifTile
     avifCodecDecodeInput * input;
     struct avifCodec * codec;
     avifImage * image;
+    uint32_t width;  // Either avifTrack.width or avifDecoderItem.width
+    uint32_t height; // Either avifTrack.height or avifDecoderItem.height
+    uint8_t operatingPoint;
 } avifTile;
 AVIF_ARRAY_DECLARE(avifTileArray, avifTile, tile);
 
@@ -514,10 +646,11 @@ typedef struct avifMeta
     // (ipma) box.
     avifPropertyArray properties;
 
-    // Filled with the contents of "idat" boxes, which are raw data that an item can directly refer to in its
-    // item location box (iloc) instead of just giving an offset into the overall file. If all items' iloc boxes
-    // simply point at an offset/length in the file itself, this array will likely be empty.
-    avifDecoderItemDataArray idats;
+    // Filled with the contents of this meta box's "idat" box, which is raw data that an item can
+    // directly refer to in its item location box (iloc) instead of just giving an offset into the
+    // overall file. If all items' iloc boxes simply point at an offset/length in the file itself,
+    // this buffer will likely be empty.
+    avifRWData idat;
 
     // Ever-incrementing ID for uniquely identifying which 'meta' box contains an idat (when
     // multiple meta boxes exist as BMFF siblings). Each time avifParseMetaBox() is called on an
@@ -539,7 +672,6 @@ static avifMeta * avifMetaCreate()
     memset(meta, 0, sizeof(avifMeta));
     avifArrayCreate(&meta->items, sizeof(avifDecoderItem), 8);
     avifArrayCreate(&meta->properties, sizeof(avifProperty), 16);
-    avifArrayCreate(&meta->idats, sizeof(avifDecoderItemData), 1);
     return meta;
 }
 
@@ -555,11 +687,7 @@ static void avifMetaDestroy(avifMeta * meta)
     }
     avifArrayDestroy(&meta->items);
     avifArrayDestroy(&meta->properties);
-    for (uint32_t i = 0; i < meta->idats.count; ++i) {
-        avifDecoderItemData * idat = &meta->idats.idat[i];
-        avifRWDataFree(&idat->data);
-    }
-    avifArrayDestroy(&meta->idats);
+    avifRWDataFree(&meta->idat);
     avifFree(meta);
 }
 
@@ -593,6 +721,7 @@ typedef struct avifDecoderData
     avifImageGrid colorGrid;
     avifImageGrid alphaGrid;
     avifDecoderSource source;
+    uint8_t majorBrand[4];                     // From the file's ftyp, used by AVIF_DECODER_SOURCE_AUTO
     avifDiagnostics * diag;                    // Shallow copy; owned by avifDecoder
     const avifSampleTable * sourceSampleTable; // NULL unless (source == AVIF_DECODER_SOURCE_TRACKS), owned by an avifTrack
     avifBool cicpSet;                          // True if avifDecoder's image has had its CICP set correctly yet.
@@ -627,11 +756,14 @@ static void avifDecoderDataResetCodec(avifDecoderData * data)
     }
 }
 
-static avifTile * avifDecoderDataCreateTile(avifDecoderData * data)
+static avifTile * avifDecoderDataCreateTile(avifDecoderData * data, uint32_t width, uint32_t height, uint8_t operatingPoint)
 {
     avifTile * tile = (avifTile *)avifArrayPushPtr(&data->tiles);
     tile->image = avifImageCreateEmpty();
     tile->input = avifCodecDecodeInputCreate();
+    tile->width = width;
+    tile->height = height;
+    tile->operatingPoint = operatingPoint;
     return tile;
 }
 
@@ -685,22 +817,19 @@ static void avifDecoderDataDestroy(avifDecoderData * data)
 // This returns the max extent that has to be read in order to decode this item. If
 // the item is stored in an idat, the data has already been read during Parse() and
 // this function will return AVIF_RESULT_OK with a 0-byte extent.
-static avifResult avifDecoderItemMaxExtent(const avifDecoderItem * item, avifExtent * outExtent)
+static avifResult avifDecoderItemMaxExtent(const avifDecoderItem * item, const avifDecodeSample * sample, avifExtent * outExtent)
 {
     if (item->extents.count == 0) {
         return AVIF_RESULT_TRUNCATED_DATA;
     }
 
-    if (item->idatID != 0) {
+    if (item->idatStored) {
         // construction_method: idat(1)
 
-        // Find associated idat box
-        for (uint32_t i = 0; i < item->meta->idats.count; ++i) {
-            if (item->meta->idats.idat[i].id == item->idatID) {
-                // Already read from a meta box during Parse()
-                memset(outExtent, 0, sizeof(avifExtent));
-                return AVIF_RESULT_OK;
-            }
+        if (item->meta->idat.size > 0) {
+            // Already read from a meta box during Parse()
+            memset(outExtent, 0, sizeof(avifExtent));
+            return AVIF_RESULT_OK;
         }
 
         // no associated idat box was found in the meta box, bail out
@@ -709,6 +838,12 @@ static avifResult avifDecoderItemMaxExtent(const avifDecoderItem * item, avifExt
 
     // construction_method: file(0)
 
+    if (sample->size == 0) {
+        return AVIF_RESULT_TRUNCATED_DATA;
+    }
+    uint64_t remainingOffset = sample->offset;
+    size_t remainingBytes = sample->size; // This may be smaller than item->size if the item is progressive
+
     // Assert that the for loop below will execute at least one iteration.
     assert(item->extents.count != 0);
     uint64_t minOffset = UINT64_MAX;
@@ -716,17 +851,47 @@ static avifResult avifDecoderItemMaxExtent(const avifDecoderItem * item, avifExt
     for (uint32_t extentIter = 0; extentIter < item->extents.count; ++extentIter) {
         avifExtent * extent = &item->extents.extent[extentIter];
 
-        if (extent->size > UINT64_MAX - extent->offset) {
+        // Make local copies of extent->offset and extent->size as they might need to be adjusted
+        // due to the sample's offset.
+        uint64_t startOffset = extent->offset;
+        size_t extentSize = extent->size;
+        if (remainingOffset) {
+            if (remainingOffset >= extentSize) {
+                remainingOffset -= extentSize;
+                continue;
+            } else {
+                if (remainingOffset > UINT64_MAX - startOffset) {
+                    return AVIF_RESULT_BMFF_PARSE_FAILED;
+                }
+                startOffset += remainingOffset;
+                extentSize -= remainingOffset;
+                remainingOffset = 0;
+            }
+        }
+
+        const size_t usedExtentSize = (extentSize < remainingBytes) ? extentSize : remainingBytes;
+
+        if (usedExtentSize > UINT64_MAX - startOffset) {
             return AVIF_RESULT_BMFF_PARSE_FAILED;
         }
-        const uint64_t endOffset = extent->offset + extent->size;
+        const uint64_t endOffset = startOffset + usedExtentSize;
 
-        if (minOffset > extent->offset) {
-            minOffset = extent->offset;
+        if (minOffset > startOffset) {
+            minOffset = startOffset;
         }
         if (maxOffset < endOffset) {
             maxOffset = endOffset;
         }
+
+        remainingBytes -= usedExtentSize;
+        if (remainingBytes == 0) {
+            // We've got enough bytes for this sample.
+            break;
+        }
+    }
+
+    if (remainingBytes != 0) {
+        return AVIF_RESULT_TRUNCATED_DATA;
     }
 
     outExtent->offset = minOffset;
@@ -738,6 +903,15 @@ static avifResult avifDecoderItemMaxExtent(const avifDecoderItem * item, avifExt
     return AVIF_RESULT_OK;
 }
 
+static uint8_t avifDecoderItemOperatingPoint(const avifDecoderItem * item)
+{
+    const avifProperty * a1opProp = avifPropertyArrayFind(&item->properties, "a1op");
+    if (a1opProp) {
+        return a1opProp->u.a1op.opIndex;
+    }
+    return 0; // default
+}
+
 static avifResult avifDecoderItemValidateAV1(const avifDecoderItem * item, avifDiagnostics * diag, const avifStrictFlags strictFlags)
 {
     const avifProperty * av1CProp = avifPropertyArrayFind(&item->properties, "av1C");
@@ -793,11 +967,21 @@ static avifResult avifDecoderItemValidateAV1(const avifDecoderItem * item, avifD
     return AVIF_RESULT_OK;
 }
 
-static avifResult avifDecoderItemRead(avifDecoderItem * item, avifIO * io, avifROData * outData, size_t partialByteCount, avifDiagnostics * diag)
+static avifResult avifDecoderItemRead(avifDecoderItem * item,
+                                      avifIO * io,
+                                      avifROData * outData,
+                                      size_t offset,
+                                      size_t partialByteCount,
+                                      avifDiagnostics * diag)
 {
     if (item->mergedExtents.data && !item->partialMergedExtents) {
         // Multiple extents have already been concatenated for this item, just return it
-        memcpy(outData, &item->mergedExtents, sizeof(avifROData));
+        if (offset >= item->mergedExtents.size) {
+            avifDiagnosticsPrintf(diag, "Item ID %u read has overflowing offset", item->id);
+            return AVIF_RESULT_TRUNCATED_DATA;
+        }
+        outData->data = item->mergedExtents.data + offset;
+        outData->size = item->mergedExtents.size - offset;
         return AVIF_RESULT_OK;
     }
 
@@ -808,18 +992,12 @@ static avifResult avifDecoderItemRead(avifDecoderItem * item, avifIO * io, avifR
 
     // Find this item's source of all extents' data, based on the construction method
     const avifRWData * idatBuffer = NULL;
-    if (item->idatID != 0) {
+    if (item->idatStored) {
         // construction_method: idat(1)
 
-        // Find associated idat box
-        for (uint32_t i = 0; i < item->meta->idats.count; ++i) {
-            if (item->meta->idats.idat[i].id == item->idatID) {
-                idatBuffer = &item->meta->idats.idat[i].data;
-                break;
-            }
-        }
-
-        if (idatBuffer == NULL) {
+        if (item->meta->idat.size > 0) {
+            idatBuffer = &item->meta->idat;
+        } else {
             // no associated idat box was found in the meta box, bail out
             avifDiagnosticsPrintf(diag, "Item ID %u is stored in an idat, but no associated idat box was found", item->id);
             return AVIF_RESULT_NO_CONTENT;
@@ -833,10 +1011,13 @@ static avifResult avifDecoderItemRead(avifDecoderItem * item, avifIO * io, avifR
         return AVIF_RESULT_TRUNCATED_DATA;
     }
 
-    size_t totalBytesToRead = item->size;
-    if (partialByteCount && (totalBytesToRead > partialByteCount)) {
-        totalBytesToRead = partialByteCount;
+    if (offset >= item->size) {
+        avifDiagnosticsPrintf(diag, "Item ID %u read has overflowing offset", item->id);
+        return AVIF_RESULT_TRUNCATED_DATA;
     }
+    const size_t maxOutputSize = item->size - offset;
+    const size_t readOutputSize = (partialByteCount && (partialByteCount < maxOutputSize)) ? partialByteCount : maxOutputSize;
+    const size_t totalBytesToRead = offset + readOutputSize;
 
     // If there is a single extent for this item and the source of the read buffer is going to be
     // persistent for the lifetime of the avifDecoder (whether it comes from its own internal
@@ -844,7 +1025,13 @@ static avifResult avifDecoderItemRead(avifDecoderItem * item, avifIO * io, avifR
     // preexisting buffer.
     avifBool singlePersistentBuffer = ((item->extents.count == 1) && (idatBuffer || io->persistent));
     if (!singlePersistentBuffer) {
-        avifRWDataRealloc(&item->mergedExtents, totalBytesToRead);
+        // Always allocate the item's full size here, as progressive image decodes will do partial
+        // reads into this buffer and begin feeding the buffer to the underlying AV1 decoder, but
+        // will then write more into this buffer without flushing the AV1 decoder (which is still
+        // holding the address of the previous allocation of this buffer). This strategy avoids
+        // use-after-free issues in the AV1 decoder and unnecessary reallocs as a typical
+        // progressive decode use case will eventually decode the final layer anyway.
+        avifRWDataRealloc(&item->mergedExtents, item->size);
         item->ownsMergedExtents = AVIF_TRUE;
     }
 
@@ -919,13 +1106,13 @@ static avifResult avifDecoderItemRead(avifDecoderItem * item, avifIO * io, avifR
         return AVIF_RESULT_TRUNCATED_DATA;
     }
 
-    outData->data = item->mergedExtents.data;
-    outData->size = totalBytesToRead;
+    outData->data = item->mergedExtents.data + offset;
+    outData->size = readOutputSize;
     item->partialMergedExtents = (item->size != totalBytesToRead);
     return AVIF_RESULT_OK;
 }
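The offset arithmetic added to avifDecoderItemRead reduces to three values; a minimal sketch (hypothetical `computeReadWindow` helper) shows how partialByteCount caps the returned window while totalBytesToRead still includes the skipped prefix that must be present in the source:

```c
#include <stddef.h>

// Hypothetical mirror of the read-window math in the diff above.
// Returns 0 on success, -1 if the offset lands past the end of the item.
static int computeReadWindow(size_t itemSize, size_t offset, size_t partialByteCount,
                             size_t * readOutputSize, size_t * totalBytesToRead)
{
    if (offset >= itemSize) {
        return -1; // overflowing offset, reported as truncated data
    }
    const size_t maxOutputSize = itemSize - offset;
    *readOutputSize = (partialByteCount && (partialByteCount < maxOutputSize)) ? partialByteCount : maxOutputSize;
    *totalBytesToRead = offset + *readOutputSize; // bytes that must exist before outData is valid
    return 0;
}
```
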
 
-static avifBool avifDecoderDataGenerateImageGridTiles(avifDecoderData * data, avifImageGrid * grid, avifDecoderItem * gridItem, avifBool alpha)
+static avifBool avifDecoderGenerateImageGridTiles(avifDecoder * decoder, avifImageGrid * grid, avifDecoderItem * gridItem, avifBool alpha)
 {
     unsigned int tilesRequested = grid->rows * grid->columns;
 
@@ -940,7 +1127,7 @@ static avifBool avifDecoderDataGenerateImageGridTiles(avifDecoderData * data, av
             if (item->hasUnsupportedEssentialProperty) {
                 // An essential property isn't supported by libavif; can't
                 // decode a grid image if any tile in the grid isn't supported.
-                avifDiagnosticsPrintf(data->diag, "Grid image contains tile with an unsupported property marked as essential");
+                avifDiagnosticsPrintf(&decoder->diag, "Grid image contains tile with an unsupported property marked as essential");
                 return AVIF_FALSE;
             }
 
@@ -949,7 +1136,7 @@ static avifBool avifDecoderDataGenerateImageGridTiles(avifDecoderData * data, av
     }
 
     if (tilesRequested != tilesAvailable) {
-        avifDiagnosticsPrintf(data->diag,
+        avifDiagnosticsPrintf(&decoder->diag,
                               "Grid image of dimensions %ux%u requires %u tiles, and only %u were found",
                               grid->columns,
                               grid->rows,
@@ -966,12 +1153,15 @@ static avifBool avifDecoderDataGenerateImageGridTiles(avifDecoderData * data, av
                 continue;
             }
 
-            avifTile * tile = avifDecoderDataCreateTile(data);
-            avifDecodeSample * sample = (avifDecodeSample *)avifArrayPushPtr(&tile->input->samples);
-            sample->itemID = item->id;
-            sample->offset = 0;
-            sample->size = item->size;
-            sample->sync = AVIF_TRUE;
+            avifTile * tile = avifDecoderDataCreateTile(decoder->data, item->width, item->height, avifDecoderItemOperatingPoint(item));
+            if (!avifCodecDecodeInputFillFromDecoderItem(tile->input,
+                                                         item,
+                                                         decoder->allowProgressive,
+                                                         decoder->imageCountLimit,
+                                                         decoder->io->sizeHint,
+                                                         &decoder->diag)) {
+                return AVIF_FALSE;
+            }
             tile->input->alpha = alpha;
 
             if (firstTile) {
@@ -981,11 +1171,19 @@ static avifBool avifDecoderDataGenerateImageGridTiles(avifDecoderData * data, av
                 // the top-level color/alpha item during avifDecoderReset().
                 const avifProperty * srcProp = avifPropertyArrayFind(&item->properties, "av1C");
                 if (!srcProp) {
-                    avifDiagnosticsPrintf(data->diag, "Grid image's first tile is missing an av1C property");
+                    avifDiagnosticsPrintf(&decoder->diag, "Grid image's first tile is missing an av1C property");
                     return AVIF_FALSE;
                 }
                 avifProperty * dstProp = (avifProperty *)avifArrayPushPtr(&gridItem->properties);
                 memcpy(dstProp, srcProp, sizeof(avifProperty));
+
+                if (!alpha && item->progressive) {
+                    decoder->progressiveState = AVIF_PROGRESSIVE_STATE_AVAILABLE;
+                    if (tile->input->samples.count > 1) {
+                        decoder->progressiveState = AVIF_PROGRESSIVE_STATE_ACTIVE;
+                        decoder->imageCount = tile->input->samples.count;
+                    }
+                }
             }
         }
     }
@@ -1198,7 +1396,7 @@ static avifResult avifDecoderFindMetadata(avifDecoder * decoder, avifMeta * meta
 
         if (!decoder->ignoreExif && !memcmp(item->type, "Exif", 4)) {
             avifROData exifContents;
-            avifResult readResult = avifDecoderItemRead(item, decoder->io, &exifContents, 0, &decoder->diag);
+            avifResult readResult = avifDecoderItemRead(item, decoder->io, &exifContents, 0, 0, &decoder->diag);
             if (readResult != AVIF_RESULT_OK) {
                 return readResult;
             }
@@ -1212,7 +1410,7 @@ static avifResult avifDecoderFindMetadata(avifDecoder * decoder, avifMeta * meta
         } else if (!decoder->ignoreXMP && !memcmp(item->type, "mime", 4) &&
                    !memcmp(item->contentType.contentType, xmpContentType, xmpContentTypeSize)) {
             avifROData xmpContents;
-            avifResult readResult = avifDecoderItemRead(item, decoder->io, &xmpContents, 0, &decoder->diag);
+            avifResult readResult = avifDecoderItemRead(item, decoder->io, &xmpContents, 0, 0, &decoder->diag);
             if (readResult != AVIF_RESULT_OK) {
                 return readResult;
             }
@@ -1303,7 +1501,6 @@ static avifBool avifParseItemLocationBox(avifMeta * meta, const uint8_t * raw, s
     }
     for (uint32_t i = 0; i < itemCount; ++i) {
         uint32_t itemID;
-        uint32_t idatID = 0;
         if (version < 2) {
             CHECK(avifROStreamReadU16(&s, &tmp16)); // unsigned int(16) item_ID;
             itemID = tmp16;
@@ -1311,6 +1508,17 @@ static avifBool avifParseItemLocationBox(avifMeta * meta, const uint8_t * raw, s
             CHECK(avifROStreamReadU32(&s, &itemID)); // unsigned int(32) item_ID;
         }
 
+        avifDecoderItem * item = avifMetaFindItem(meta, itemID);
+        if (!item) {
+            avifDiagnosticsPrintf(diag, "Box[iloc] has an invalid item ID [%u]", itemID);
+            return AVIF_FALSE;
+        }
+        if (item->extents.count > 0) {
+            // This item has already been given extents via this iloc box. This is invalid.
+            avifDiagnosticsPrintf(diag, "Item ID [%u] contains duplicate sets of extents", itemID);
+            return AVIF_FALSE;
+        }
+
         if ((version == 1) || (version == 2)) {
             uint8_t ignored;
             uint8_t constructionMethod;
@@ -1323,22 +1531,10 @@ static avifBool avifParseItemLocationBox(avifMeta * meta, const uint8_t * raw, s
                 return AVIF_FALSE;
             }
             if (constructionMethod == 1) {
-                idatID = meta->idatID;
+                item->idatStored = AVIF_TRUE;
             }
         }
 
-        avifDecoderItem * item = avifMetaFindItem(meta, itemID);
-        if (!item) {
-            avifDiagnosticsPrintf(diag, "Box[iloc] has an invalid item ID [%u]", itemID);
-            return AVIF_FALSE;
-        }
-        if (item->extents.count > 0) {
-            // This item has already been given extents via this iloc box. This is invalid.
-            avifDiagnosticsPrintf(diag, "Item ID [%u] contains duplicate sets of extents", itemID);
-            return AVIF_FALSE;
-        }
-        item->idatID = idatID;
-
         uint16_t dataReferenceIndex;                                 // unsigned int(16) data_reference_index;
         CHECK(avifROStreamReadU16(&s, &dataReferenceIndex));         //
         uint64_t baseOffset;                                         // unsigned int(base_offset_size*8) base_offset;
@@ -1386,7 +1582,7 @@ static avifBool avifParseItemLocationBox(avifMeta * meta, const uint8_t * raw, s
     return AVIF_TRUE;
 }
 
-static avifBool avifParseImageGridBox(avifImageGrid * grid, const uint8_t * raw, size_t rawLen, avifDiagnostics * diag)
+static avifBool avifParseImageGridBox(avifImageGrid * grid, const uint8_t * raw, size_t rawLen, uint32_t imageSizeLimit, avifDiagnostics * diag)
 {
     BEGIN_STREAM(s, raw, rawLen, diag, "Box[grid]");
 
@@ -1419,10 +1615,14 @@ static avifBool avifParseImageGridBox(avifImageGrid * grid, const uint8_t * raw,
         CHECK(avifROStreamReadU32(&s, &grid->outputWidth));  // unsigned int(FieldLength) output_width;
         CHECK(avifROStreamReadU32(&s, &grid->outputHeight)); // unsigned int(FieldLength) output_height;
     }
-    if ((grid->outputWidth == 0) || (grid->outputHeight == 0) || (grid->outputWidth > (AVIF_MAX_IMAGE_SIZE / grid->outputHeight))) {
+    if ((grid->outputWidth == 0) || (grid->outputHeight == 0)) {
         avifDiagnosticsPrintf(diag, "Grid box contains illegal dimensions: [%u x %u]", grid->outputWidth, grid->outputHeight);
         return AVIF_FALSE;
     }
+    if (grid->outputWidth > (imageSizeLimit / grid->outputHeight)) {
+        avifDiagnosticsPrintf(diag, "Grid box dimensions are too large: [%u x %u]", grid->outputWidth, grid->outputHeight);
+        return AVIF_FALSE;
+    }
     return avifROStreamRemainingBytes(&s) == 0;
 }
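The grid-box change above splits the dimension check in two and, instead of multiplying width by height (which can overflow 32 bits), compares the width against the limit divided by the height. A minimal standalone sketch of the same guard (the helper name and signature are illustrative, not libavif API):

```c
#include <stdint.h>

// Overflow-safe check that w * h does not exceed a pixel-count limit.
// Rewriting (w * h <= limit) as (w <= limit / h) keeps everything in
// 32-bit arithmetic; integer division rounds down, so the comparison
// never admits a product larger than the limit.
static int dimensionsFit(uint32_t w, uint32_t h, uint32_t limit)
{
    if ((w == 0) || (h == 0)) {
        return 0; // zero-sized images are always invalid
    }
    return w <= (limit / h);
}
```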
 
@@ -1579,6 +1779,59 @@ static avifBool avifParsePixelInformationProperty(avifProperty * prop, const uin
     return AVIF_TRUE;
 }
 
+static avifBool avifParseOperatingPointSelectorProperty(avifProperty * prop, const uint8_t * raw, size_t rawLen, avifDiagnostics * diag)
+{
+    BEGIN_STREAM(s, raw, rawLen, diag, "Box[a1op]");
+
+    avifOperatingPointSelectorProperty * a1op = &prop->u.a1op;
+    CHECK(avifROStreamRead(&s, &a1op->opIndex, 1));
+    if (a1op->opIndex > 31) { // 31 is AV1's max operating point value
+        avifDiagnosticsPrintf(diag, "Box[a1op] contains an unsupported operating point [%u]", a1op->opIndex);
+        return AVIF_FALSE;
+    }
+    return AVIF_TRUE;
+}
+
+static avifBool avifParseLayerSelectorProperty(avifProperty * prop, const uint8_t * raw, size_t rawLen, avifDiagnostics * diag)
+{
+    BEGIN_STREAM(s, raw, rawLen, diag, "Box[lsel]");
+
+    avifLayerSelectorProperty * lsel = &prop->u.lsel;
+    CHECK(avifROStreamReadU16(&s, &lsel->layerID));
+    if (lsel->layerID >= MAX_AV1_LAYER_COUNT) {
+        avifDiagnosticsPrintf(diag, "Box[lsel] contains an unsupported layer [%u]", lsel->layerID);
+        return AVIF_FALSE;
+    }
+    return AVIF_TRUE;
+}
+
+static avifBool avifParseAV1LayeredImageIndexingProperty(avifProperty * prop, const uint8_t * raw, size_t rawLen, avifDiagnostics * diag)
+{
+    BEGIN_STREAM(s, raw, rawLen, diag, "Box[a1lx]");
+
+    avifAV1LayeredImageIndexingProperty * a1lx = &prop->u.a1lx;
+
+    uint8_t largeSize = 0;
+    CHECK(avifROStreamRead(&s, &largeSize, 1));
+    if (largeSize & 0xFE) {
+        avifDiagnosticsPrintf(diag, "Box[a1lx] has bits set in the reserved section [%u]", largeSize);
+        return AVIF_FALSE;
+    }
+
+    for (int i = 0; i < 3; ++i) {
+        if (largeSize) {
+            CHECK(avifROStreamReadU32(&s, &a1lx->layerSize[i]));
+        } else {
+            uint16_t layerSize16;
+            CHECK(avifROStreamReadU16(&s, &layerSize16));
+            a1lx->layerSize[i] = (uint32_t)layerSize16;
+        }
+    }
+
+    // Layer sizes will be validated later (when the item's size is known)
+    return AVIF_TRUE;
+}
+
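The a1lx payload parsed above is one flag byte (bit 0 selects 16- vs 32-bit fields, the remaining bits are reserved) followed by three big-endian layer sizes. A self-contained sketch of that layout, using plain buffer reads instead of libavif's stream helpers (the function name is illustrative):

```c
#include <stdint.h>
#include <stddef.h>

// Parse an a1lx payload: flag byte, then three layer sizes, each 2 or 4
// bytes big-endian depending on the large_size flag. Returns 1 on success.
static int parseA1lx(const uint8_t * p, size_t len, uint32_t layerSize[3])
{
    if (len < 1) {
        return 0;
    }
    uint8_t flags = p[0];
    if (flags & 0xFE) {
        return 0; // reserved bits must be zero
    }
    size_t fieldSize = (flags & 1) ? 4 : 2;
    if (len < 1 + 3 * fieldSize) {
        return 0;
    }
    const uint8_t * q = p + 1;
    for (int i = 0; i < 3; ++i) {
        uint32_t v = 0;
        for (size_t b = 0; b < fieldSize; ++b) {
            v = (v << 8) | q[b]; // big-endian accumulate
        }
        layerSize[i] = v;
        q += fieldSize;
    }
    return 1;
}
```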
 static avifBool avifParseItemPropertyContainerBox(avifPropertyArray * properties, const uint8_t * raw, size_t rawLen, avifDiagnostics * diag)
 {
     BEGIN_STREAM(s, raw, rawLen, diag, "Box[ipco]");
@@ -1608,6 +1861,12 @@ static avifBool avifParseItemPropertyContainerBox(avifPropertyArray * properties
             CHECK(avifParseImageMirrorProperty(prop, avifROStreamCurrent(&s), header.size, diag));
         } else if (!memcmp(header.type, "pixi", 4)) {
             CHECK(avifParsePixelInformationProperty(prop, avifROStreamCurrent(&s), header.size, diag));
+        } else if (!memcmp(header.type, "a1op", 4)) {
+            CHECK(avifParseOperatingPointSelectorProperty(prop, avifROStreamCurrent(&s), header.size, diag));
+        } else if (!memcmp(header.type, "lsel", 4)) {
+            CHECK(avifParseLayerSelectorProperty(prop, avifROStreamCurrent(&s), header.size, diag));
+        } else if (!memcmp(header.type, "a1lx", 4)) {
+            CHECK(avifParseAV1LayeredImageIndexingProperty(prop, avifROStreamCurrent(&s), header.size, diag));
         }
 
         CHECK(avifROStreamSkip(&s, header.size));
@@ -1694,7 +1953,8 @@ static avifBool avifParseItemPropertyAssociation(avifMeta * meta, const uint8_t
             // Copy property to item
             avifProperty * srcProp = &meta->properties.prop[propertyIndex];
 
-            static const char * supportedTypes[] = { "ispe", "auxC", "colr", "av1C", "pasp", "clap", "irot", "imir", "pixi" };
+            static const char * supportedTypes[] = { "ispe", "auxC", "colr", "av1C", "pasp", "clap",
+                                                     "irot", "imir", "pixi", "a1op", "lsel", "a1lx" };
             size_t supportedTypesCount = sizeof(supportedTypes) / sizeof(supportedTypes[0]);
             avifBool supportedType = AVIF_FALSE;
             for (size_t i = 0; i < supportedTypesCount; ++i) {
@@ -1704,6 +1964,52 @@ static avifBool avifParseItemPropertyAssociation(avifMeta * meta, const uint8_t
                 }
             }
             if (supportedType) {
+                if (essential) {
+                    // Verify that it is legal for this property to be flagged as essential. Any
+                    // types in this list are *required* in the spec to not be flagged as essential
+                    // when associated with an item.
+                    static const char * const nonessentialTypes[] = {
+
+                        // AVIF: Section 2.3.2.3.2: "If associated, it shall not be marked as essential."
+                        "a1lx"
+
+                    };
+                    size_t nonessentialTypesCount = sizeof(nonessentialTypes) / sizeof(nonessentialTypes[0]);
+                    for (size_t i = 0; i < nonessentialTypesCount; ++i) {
+                        if (!memcmp(srcProp->type, nonessentialTypes[i], 4)) {
+                            avifDiagnosticsPrintf(diag,
+                                                  "Item ID [%u] has a %s property association which must not be marked essential, but is",
+                                                  itemID,
+                                                  nonessentialTypes[i]);
+                            return AVIF_FALSE;
+                        }
+                    }
+                } else {
+                    // Verify that it is legal for this property to not be flagged as essential. Any
+                    // types in this list are *required* in the spec to be flagged as essential when
+                    // associated with an item.
+                    static const char * const essentialTypes[] = {
+
+                        // AVIF: Section 2.3.2.1.1: "If associated, it shall be marked as essential."
+                        "a1op",
+
+                        // HEIF: Section 6.5.11.1: "essential shall be equal to 1 for an 'lsel' item property."
+                        "lsel"
+
+                    };
+                    size_t essentialTypesCount = sizeof(essentialTypes) / sizeof(essentialTypes[0]);
+                    for (size_t i = 0; i < essentialTypesCount; ++i) {
+                        if (!memcmp(srcProp->type, essentialTypes[i], 4)) {
+                            avifDiagnosticsPrintf(diag,
+                                                  "Item ID [%u] has a %s property association which must be marked essential, but is not",
+                                                  itemID,
+                                                  essentialTypes[i]);
+                            return AVIF_FALSE;
+                        }
+                    }
+                }
+
+                // Supported and valid; associate it with this item.
                 avifProperty * dstProp = (avifProperty *)avifArrayPushPtr(&item->properties);
                 memcpy(dstProp, srcProp, sizeof(avifProperty));
             } else {
@@ -1715,7 +2021,6 @@ static avifBool avifParseItemPropertyAssociation(avifMeta * meta, const uint8_t
             }
         }
     }
-
     return AVIF_TRUE;
 }
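The association logic above enforces two spec rules: a1lx must never be marked essential (AVIF section 2.3.2.3.2), while a1op (AVIF section 2.3.2.1.1) and lsel (HEIF section 6.5.11.1) must always be. The policy can be reduced to a small table-driven predicate; this is an illustrative sketch, not libavif's actual helper:

```c
#include <string.h>

// Returns 1 when the (4-byte property type, essential flag) pairing is
// legal per the AVIF/HEIF specs referenced above, 0 when it is not.
static int essentialFlagLegal(const char type[4], int essential)
{
    static const char * const mustBeEssential[] = { "a1op", "lsel" };
    static const char * const mustNotBeEssential[] = { "a1lx" };
    for (size_t i = 0; i < sizeof(mustBeEssential) / sizeof(mustBeEssential[0]); ++i) {
        if (!memcmp(type, mustBeEssential[i], 4) && !essential) {
            return 0;
        }
    }
    for (size_t i = 0; i < sizeof(mustNotBeEssential) / sizeof(mustNotBeEssential[0]); ++i) {
        if (!memcmp(type, mustNotBeEssential[i], 4) && essential) {
            return 0;
        }
    }
    return 1; // no rule constrains this type's essential flag
}
```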
 
@@ -1745,17 +2050,16 @@ static avifBool avifParsePrimaryItemBox(avifMeta * meta, const uint8_t * raw, si
 static avifBool avifParseItemDataBox(avifMeta * meta, const uint8_t * raw, size_t rawLen, avifDiagnostics * diag)
 {
     // Check to see if we've already seen an idat box for this meta box. If so, bail out
-    for (uint32_t i = 0; i < meta->idats.count; ++i) {
-        if (meta->idats.idat[i].id == meta->idatID) {
-            avifDiagnosticsPrintf(diag, "Meta box contains multiple idat boxes");
-            return AVIF_FALSE;
-        }
+    if (meta->idat.size > 0) {
+        avifDiagnosticsPrintf(diag, "Meta box contains multiple idat boxes");
+        return AVIF_FALSE;
+    }
+    if (rawLen == 0) {
+        avifDiagnosticsPrintf(diag, "idat box has a length of 0");
+        return AVIF_FALSE;
     }
 
-    int index = avifArrayPushIndex(&meta->idats);
-    avifDecoderItemData * idat = &meta->idats.idat[index];
-    idat->id = meta->idatID;
-    avifRWDataSet(&idat->data, raw, rawLen);
+    avifRWDataSet(&meta->idat, raw, rawLen);
     return AVIF_TRUE;
 }
 
@@ -2005,7 +2309,7 @@ static avifBool avifParseMetaBox(avifMeta * meta, const uint8_t * raw, size_t ra
     return AVIF_TRUE;
 }
 
-static avifBool avifParseTrackHeaderBox(avifTrack * track, const uint8_t * raw, size_t rawLen, avifDiagnostics * diag)
+static avifBool avifParseTrackHeaderBox(avifTrack * track, const uint8_t * raw, size_t rawLen, uint32_t imageSizeLimit, avifDiagnostics * diag)
 {
     BEGIN_STREAM(s, raw, rawLen, diag, "Box[tkhd]");
 
@@ -2048,6 +2352,15 @@ static avifBool avifParseTrackHeaderBox(avifTrack * track, const uint8_t * raw,
     track->width = width >> 16;
     track->height = height >> 16;
 
+    if ((track->width == 0) || (track->height == 0)) {
+        avifDiagnosticsPrintf(diag, "Track ID [%u] has an invalid size [%ux%u]", track->id, track->width, track->height);
+        return AVIF_FALSE;
+    }
+    if (track->width > (imageSizeLimit / track->height)) {
+        avifDiagnosticsPrintf(diag, "Track ID [%u] size is too large [%ux%u]", track->id, track->width, track->height);
+        return AVIF_FALSE;
+    }
+
     // TODO: support scaling based on width/height track header info?
 
     track->id = trackID;
@@ -2322,7 +2635,7 @@ static avifBool avifTrackReferenceBox(avifTrack * track, const uint8_t * raw, si
     return AVIF_TRUE;
 }
 
-static avifBool avifParseTrackBox(avifDecoderData * data, const uint8_t * raw, size_t rawLen)
+static avifBool avifParseTrackBox(avifDecoderData * data, const uint8_t * raw, size_t rawLen, uint32_t imageSizeLimit)
 {
     BEGIN_STREAM(s, raw, rawLen, data->diag, "Box[trak]");
 
@@ -2333,7 +2646,7 @@ static avifBool avifParseTrackBox(avifDecoderData * data, const uint8_t * raw, s
         CHECK(avifROStreamReadBoxHeader(&s, &header));
 
         if (!memcmp(header.type, "tkhd", 4)) {
-            CHECK(avifParseTrackHeaderBox(track, avifROStreamCurrent(&s), header.size, data->diag));
+            CHECK(avifParseTrackHeaderBox(track, avifROStreamCurrent(&s), header.size, imageSizeLimit, data->diag));
         } else if (!memcmp(header.type, "meta", 4)) {
             CHECK(avifParseMetaBox(track->meta, avifROStreamCurrent(&s), header.size, data->diag));
         } else if (!memcmp(header.type, "mdia", 4)) {
@@ -2347,7 +2660,7 @@ static avifBool avifParseTrackBox(avifDecoderData * data, const uint8_t * raw, s
     return AVIF_TRUE;
 }
 
-static avifBool avifParseMoovBox(avifDecoderData * data, const uint8_t * raw, size_t rawLen)
+static avifBool avifParseMovieBox(avifDecoderData * data, const uint8_t * raw, size_t rawLen, uint32_t imageSizeLimit)
 {
     BEGIN_STREAM(s, raw, rawLen, data->diag, "Box[moov]");
 
@@ -2356,7 +2669,7 @@ static avifBool avifParseMoovBox(avifDecoderData * data, const uint8_t * raw, si
         CHECK(avifROStreamReadBoxHeader(&s, &header));
 
         if (!memcmp(header.type, "trak", 4)) {
-            CHECK(avifParseTrackBox(data, avifROStreamCurrent(&s), header.size));
+            CHECK(avifParseTrackBox(data, avifROStreamCurrent(&s), header.size, imageSizeLimit));
         }
 
         CHECK(avifROStreamSkip(&s, header.size));
@@ -2449,6 +2762,7 @@ static avifResult avifParse(avifDecoder * decoder)
                 return AVIF_RESULT_INVALID_FTYP;
             }
             ftypSeen = AVIF_TRUE;
+            memcpy(data->majorBrand, ftyp.majorBrand, 4); // Remember the major brand for future AVIF_DECODER_SOURCE_AUTO decisions
             needsMeta = avifFileTypeHasBrand(&ftyp, "avif");
             needsMoov = avifFileTypeHasBrand(&ftyp, "avis");
         } else if (!memcmp(header.type, "meta", 4)) {
@@ -2457,7 +2771,7 @@ static avifResult avifParse(avifDecoder * decoder)
             metaSeen = AVIF_TRUE;
         } else if (!memcmp(header.type, "moov", 4)) {
             CHECKERR(!moovSeen, AVIF_RESULT_BMFF_PARSE_FAILED);
-            CHECKERR(avifParseMoovBox(data, boxContents.data, boxContents.size), AVIF_RESULT_BMFF_PARSE_FAILED);
+            CHECKERR(avifParseMovieBox(data, boxContents.data, boxContents.size, decoder->imageSizeLimit), AVIF_RESULT_BMFF_PARSE_FAILED);
             moovSeen = AVIF_TRUE;
         }
 
@@ -2525,6 +2839,7 @@ avifDecoder * avifDecoderCreate(void)
     avifDecoder * decoder = (avifDecoder *)avifAlloc(sizeof(avifDecoder));
     memset(decoder, 0, sizeof(avifDecoder));
     decoder->maxThreads = 1;
+    decoder->imageSizeLimit = AVIF_DEFAULT_IMAGE_SIZE_LIMIT;
     decoder->imageCountLimit = AVIF_DEFAULT_IMAGE_COUNT_LIMIT;
     decoder->strictFlags = AVIF_STRICT_ENABLED;
     return decoder;
@@ -2631,7 +2946,7 @@ avifResult avifDecoderNthImageMaxExtent(const avifDecoder * decoder, uint32_t fr
                 // The data comes from an item. Let avifDecoderItemMaxExtent() do the heavy lifting.
 
                 avifDecoderItem * item = avifMetaFindItem(decoder->data->meta, sample->itemID);
-                avifResult maxExtentResult = avifDecoderItemMaxExtent(item, &sampleExtent);
+                avifResult maxExtentResult = avifDecoderItemMaxExtent(item, sample, &sampleExtent);
                 if (maxExtentResult != AVIF_RESULT_OK) {
                     return maxExtentResult;
                 }
@@ -2660,12 +2975,17 @@ static avifResult avifDecoderPrepareSample(avifDecoder * decoder, avifDecodeSamp
     if (!sample->data.size || sample->partialData) {
         // This sample hasn't been read from IO or had its extents fully merged yet.
 
+        size_t bytesToRead = sample->size;
+        if (partialByteCount && (bytesToRead > partialByteCount)) {
+            bytesToRead = partialByteCount;
+        }
+
         if (sample->itemID) {
             // The data comes from an item. Let avifDecoderItemRead() do the heavy lifting.
 
             avifDecoderItem * item = avifMetaFindItem(decoder->data->meta, sample->itemID);
             avifROData itemContents;
-            avifResult readResult = avifDecoderItemRead(item, decoder->io, &itemContents, partialByteCount, &decoder->diag);
+            avifResult readResult = avifDecoderItemRead(item, decoder->io, &itemContents, sample->offset, bytesToRead, &decoder->diag);
             if (readResult != AVIF_RESULT_OK) {
                 return readResult;
             }
@@ -2678,11 +2998,6 @@ static avifResult avifDecoderPrepareSample(avifDecoder * decoder, avifDecodeSamp
         } else {
             // The data likely comes from a sample table. Pull the sample and make a copy if necessary.
 
-            size_t bytesToRead = sample->size;
-            if (partialByteCount && (bytesToRead > partialByteCount)) {
-                bytesToRead = partialByteCount;
-            }
-
             avifROData sampleContents;
             if ((decoder->io->sizeHint > 0) && (sample->offset > decoder->io->sizeHint)) {
                 return AVIF_RESULT_BMFF_PARSE_FAILED;
@@ -2711,6 +3026,11 @@ avifResult avifDecoderParse(avifDecoder * decoder)
 {
     avifDiagnosticsClearError(&decoder->diag);
 
+    // An imageSizeLimit greater than AVIF_DEFAULT_IMAGE_SIZE_LIMIT and the special value of 0 to
+    // disable the limit are not yet implemented.
+    if ((decoder->imageSizeLimit > AVIF_DEFAULT_IMAGE_SIZE_LIMIT) || (decoder->imageSizeLimit == 0)) {
+        return AVIF_RESULT_NOT_IMPLEMENTED;
+    }
     if (!decoder->io || !decoder->io->read) {
         return AVIF_RESULT_IO_NOT_SET;
     }
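The new imageSizeLimit field is validated up front: only values in (0, AVIF_DEFAULT_IMAGE_SIZE_LIMIT] are currently honored, and anything else yields AVIF_RESULT_NOT_IMPLEMENTED. A sketch of that acceptance test, assuming the default limit is 16384 * 16384 pixels (the value used by this libavif version; the helper name is illustrative):

```c
#include <stdint.h>

#define SKETCH_DEFAULT_IMAGE_SIZE_LIMIT (16384u * 16384u) // assumed default

// A limit of 0 (meaning "no limit") and limits above the default are not
// yet implemented by the decoder, so both are rejected here.
static int imageSizeLimitSupported(uint32_t limit)
{
    return (limit != 0) && (limit <= SKETCH_DEFAULT_IMAGE_SIZE_LIMIT);
}
```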
@@ -2729,6 +3049,51 @@ avifResult avifDecoderParse(avifDecoder * decoder)
         return parseResult;
     }
 
+    // Walk the decoded items (if any) and harvest ispe
+    avifDecoderData * data = decoder->data;
+    for (uint32_t itemIndex = 0; itemIndex < data->meta->items.count; ++itemIndex) {
+        avifDecoderItem * item = &data->meta->items.item[itemIndex];
+        if (!item->size) {
+            continue;
+        }
+        if (item->hasUnsupportedEssentialProperty) {
+            // An essential property isn't supported by libavif; ignore the item.
+            continue;
+        }
+        avifBool isGrid = (memcmp(item->type, "grid", 4) == 0);
+        if (memcmp(item->type, "av01", 4) && !isGrid) {
+            // probably exif or some other data
+            continue;
+        }
+
+        const avifProperty * ispeProp = avifPropertyArrayFind(&item->properties, "ispe");
+        if (ispeProp) {
+            item->width = ispeProp->u.ispe.width;
+            item->height = ispeProp->u.ispe.height;
+
+            if ((item->width == 0) || (item->height == 0)) {
+                avifDiagnosticsPrintf(data->diag, "Item ID [%u] has an invalid size [%ux%u]", item->id, item->width, item->height);
+                return AVIF_RESULT_BMFF_PARSE_FAILED;
+            }
+            if (item->width > (decoder->imageSizeLimit / item->height)) {
+                avifDiagnosticsPrintf(data->diag, "Item ID [%u] size is too large [%ux%u]", item->id, item->width, item->height);
+                return AVIF_RESULT_BMFF_PARSE_FAILED;
+            }
+        } else {
+            const avifProperty * auxCProp = avifPropertyArrayFind(&item->properties, "auxC");
+            if (auxCProp && isAlphaURN(auxCProp->u.auxC.auxType)) {
+                if (decoder->strictFlags & AVIF_STRICT_ALPHA_ISPE_REQUIRED) {
+                    avifDiagnosticsPrintf(data->diag,
+                                          "[Strict] Alpha auxiliary image item ID [%u] is missing a mandatory ispe property",
+                                          item->id);
+                    return AVIF_RESULT_BMFF_PARSE_FAILED;
+                }
+            } else {
+                avifDiagnosticsPrintf(data->diag, "Item ID [%u] is missing a mandatory ispe property", item->id);
+                return AVIF_RESULT_BMFF_PARSE_FAILED;
+            }
+        }
+    }
     return avifDecoderReset(decoder);
 }
 
@@ -2748,6 +3113,8 @@ static avifResult avifDecoderFlush(avifDecoder * decoder)
             return AVIF_RESULT_NO_CODEC_AVAILABLE;
         }
         tile->codec->diag = &decoder->diag;
+        tile->codec->operatingPoint = tile->operatingPoint;
+        tile->codec->allLayers = tile->input->allLayers;
     }
     return AVIF_RESULT_OK;
 }
@@ -2771,6 +3138,7 @@ avifResult avifDecoderReset(avifDecoder * decoder)
         avifImageDestroy(decoder->image);
     }
     decoder->image = avifImageCreateEmpty();
+    decoder->progressiveState = AVIF_PROGRESSIVE_STATE_UNAVAILABLE;
     data->cicpSet = AVIF_FALSE;
 
     memset(&decoder->ioStats, 0, sizeof(decoder->ioStats));
@@ -2780,7 +3148,12 @@ avifResult avifDecoderReset(avifDecoder * decoder)
 
     data->sourceSampleTable = NULL; // Reset
     if (decoder->requestedSource == AVIF_DECODER_SOURCE_AUTO) {
-        if (data->tracks.count > 0) {
+        // Honor the major brand (avif or avis) if present, otherwise prefer avis (tracks) if possible.
+        if (!memcmp(data->majorBrand, "avis", 4)) {
+            data->source = AVIF_DECODER_SOURCE_TRACKS;
+        } else if (!memcmp(data->majorBrand, "avif", 4)) {
+            data->source = AVIF_DECODER_SOURCE_PRIMARY_ITEM;
+        } else if (data->tracks.count > 0) {
             data->source = AVIF_DECODER_SOURCE_TRACKS;
         } else {
             data->source = AVIF_DECODER_SOURCE_PRIMARY_ITEM;
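The AVIF_DECODER_SOURCE_AUTO change above now consults the remembered ftyp major brand first: 'avis' forces the tracks source, 'avif' forces the primary item, and only an unrecognized brand falls back to the old "tracks if any exist" behavior. A compact sketch of that decision (enum and function names are illustrative, not libavif's):

```c
#include <string.h>

typedef enum { SKETCH_SOURCE_TRACKS, SKETCH_SOURCE_PRIMARY_ITEM } sketchSource;

// Choose a decode source from the 4-byte major brand, falling back to
// track presence when the brand is neither 'avis' nor 'avif'.
static sketchSource chooseSource(const char majorBrand[4], int trackCount)
{
    if (!memcmp(majorBrand, "avis", 4)) {
        return SKETCH_SOURCE_TRACKS;
    }
    if (!memcmp(majorBrand, "avif", 4)) {
        return SKETCH_SOURCE_PRIMARY_ITEM;
    }
    return (trackCount > 0) ? SKETCH_SOURCE_TRACKS : SKETCH_SOURCE_PRIMARY_ITEM;
}
```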
@@ -2862,19 +3235,23 @@ avifResult avifDecoderReset(avifDecoder * decoder)
             alphaTrack = &data->tracks.track[alphaTrackIndex];
         }
 
-        avifTile * colorTile = avifDecoderDataCreateTile(data);
-        if (!avifCodecDecodeInputGetSamples(colorTile->input, colorTrack->sampleTable, decoder->imageCountLimit, decoder->io->sizeHint, data->diag)) {
+        avifTile * colorTile = avifDecoderDataCreateTile(data, colorTrack->width, colorTrack->height, 0); // No way to set operating point via tracks
+        if (!avifCodecDecodeInputFillFromSampleTable(colorTile->input,
+                                                     colorTrack->sampleTable,
+                                                     decoder->imageCountLimit,
+                                                     decoder->io->sizeHint,
+                                                     data->diag)) {
             return AVIF_RESULT_BMFF_PARSE_FAILED;
         }
         data->colorTileCount = 1;
 
         if (alphaTrack) {
-            avifTile * alphaTile = avifDecoderDataCreateTile(data);
-            if (!avifCodecDecodeInputGetSamples(alphaTile->input,
-                                                alphaTrack->sampleTable,
-                                                decoder->imageCountLimit,
-                                                decoder->io->sizeHint,
-                                                data->diag)) {
+            avifTile * alphaTile = avifDecoderDataCreateTile(data, alphaTrack->width, alphaTrack->height, 0); // No way to set operating point via tracks
+            if (!avifCodecDecodeInputFillFromSampleTable(alphaTile->input,
+                                                         alphaTrack->sampleTable,
+                                                         decoder->imageCountLimit,
+                                                         decoder->io->sizeHint,
+                                                         data->diag)) {
                 return AVIF_RESULT_BMFF_PARSE_FAILED;
             }
             alphaTile->input->alpha = AVIF_TRUE;
@@ -2938,11 +3315,11 @@ avifResult avifDecoderReset(avifDecoder * decoder)
 
             if (isGrid) {
                 avifROData readData;
-                avifResult readResult = avifDecoderItemRead(item, decoder->io, &readData, 0, data->diag);
+                avifResult readResult = avifDecoderItemRead(item, decoder->io, &readData, 0, 0, data->diag);
                 if (readResult != AVIF_RESULT_OK) {
                     return readResult;
                 }
-                if (!avifParseImageGridBox(&data->colorGrid, readData.data, readData.size, data->diag)) {
+                if (!avifParseImageGridBox(&data->colorGrid, readData.data, readData.size, decoder->imageSizeLimit, data->diag)) {
                     return AVIF_RESULT_INVALID_IMAGE_GRID;
                 }
             }
@@ -2978,11 +3355,11 @@ avifResult avifDecoderReset(avifDecoder * decoder)
             if (auxCProp && isAlphaURN(auxCProp->u.auxC.auxType) && (item->auxForID == colorItem->id)) {
                 if (isGrid) {
                     avifROData readData;
-                    avifResult readResult = avifDecoderItemRead(item, decoder->io, &readData, 0, data->diag);
+                    avifResult readResult = avifDecoderItemRead(item, decoder->io, &readData, 0, 0, data->diag);
                     if (readResult != AVIF_RESULT_OK) {
                         return readResult;
                     }
-                    if (!avifParseImageGridBox(&data->alphaGrid, readData.data, readData.size, data->diag)) {
+                    if (!avifParseImageGridBox(&data->alphaGrid, readData.data, readData.size, decoder->imageSizeLimit, data->diag)) {
                         return AVIF_RESULT_INVALID_IMAGE_GRID;
                     }
                 }
@@ -2998,8 +3375,20 @@ avifResult avifDecoderReset(avifDecoder * decoder)
             return findResult;
         }
 
+        // Set all counts and timing to safe-but-uninteresting values
+        decoder->imageIndex = -1;
+        decoder->imageCount = 1;
+        decoder->imageTiming.timescale = 1;
+        decoder->imageTiming.pts = 0;
+        decoder->imageTiming.ptsInTimescales = 0;
+        decoder->imageTiming.duration = 1;
+        decoder->imageTiming.durationInTimescales = 1;
+        decoder->timescale = 1;
+        decoder->duration = 1;
+        decoder->durationInTimescales = 1;
+
         if ((data->colorGrid.rows > 0) && (data->colorGrid.columns > 0)) {
-            if (!avifDecoderDataGenerateImageGridTiles(data, &data->colorGrid, colorItem, AVIF_FALSE)) {
+            if (!avifDecoderGenerateImageGridTiles(decoder, &data->colorGrid, colorItem, AVIF_FALSE)) {
                 return AVIF_RESULT_INVALID_IMAGE_GRID;
             }
             data->colorTileCount = data->tiles.count;
@@ -3008,18 +3397,37 @@ avifResult avifDecoderReset(avifDecoder * decoder)
                 return AVIF_RESULT_NO_AV1_ITEMS_FOUND;
             }
 
-            avifTile * colorTile = avifDecoderDataCreateTile(data);
-            avifDecodeSample * colorSample = (avifDecodeSample *)avifArrayPushPtr(&colorTile->input->samples);
-            colorSample->itemID = colorItem->id;
-            colorSample->offset = 0;
-            colorSample->size = colorItem->size;
-            colorSample->sync = AVIF_TRUE;
+            avifTile * colorTile =
+                avifDecoderDataCreateTile(data, colorItem->width, colorItem->height, avifDecoderItemOperatingPoint(colorItem));
+            if (!avifCodecDecodeInputFillFromDecoderItem(colorTile->input,
+                                                         colorItem,
+                                                         decoder->allowProgressive,
+                                                         decoder->imageCountLimit,
+                                                         decoder->io->sizeHint,
+                                                         &decoder->diag)) {
+                return AVIF_RESULT_BMFF_PARSE_FAILED;
+            }
             data->colorTileCount = 1;
+
+            if (colorItem->progressive) {
+                decoder->progressiveState = AVIF_PROGRESSIVE_STATE_AVAILABLE;
+                if (colorTile->input->samples.count > 1) {
+                    decoder->progressiveState = AVIF_PROGRESSIVE_STATE_ACTIVE;
+                    decoder->imageCount = colorTile->input->samples.count;
+                }
+            }
         }
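The progressive bookkeeping above distinguishes three states: a non-progressive item leaves progressive decoding unavailable, a progressive item makes it available, and it becomes active (one output image per layer) only when the tile actually carries more than one sample. A sketch of that state decision (names are illustrative, not libavif's enum):

```c
typedef enum {
    SKETCH_PS_UNAVAILABLE,
    SKETCH_PS_AVAILABLE,
    SKETCH_PS_ACTIVE
} sketchProgressiveState;

// Map (item is progressive, number of samples in the tile input) to the
// progressive state exposed to the caller.
static sketchProgressiveState progressiveStateFor(int itemIsProgressive, unsigned sampleCount)
{
    if (!itemIsProgressive) {
        return SKETCH_PS_UNAVAILABLE;
    }
    return (sampleCount > 1) ? SKETCH_PS_ACTIVE : SKETCH_PS_AVAILABLE;
}
```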
 
         if (alphaItem) {
+            if (!alphaItem->width && !alphaItem->height) {
+                // NON-STANDARD: Alpha subimage does not have an ispe property; adopt width/height from color item
+                assert(!(decoder->strictFlags & AVIF_STRICT_ALPHA_ISPE_REQUIRED));
+                alphaItem->width = colorItem->width;
+                alphaItem->height = colorItem->height;
+            }
+
             if ((data->alphaGrid.rows > 0) && (data->alphaGrid.columns > 0)) {
-                if (!avifDecoderDataGenerateImageGridTiles(data, &data->alphaGrid, alphaItem, AVIF_TRUE)) {
+                if (!avifDecoderGenerateImageGridTiles(decoder, &data->alphaGrid, alphaItem, AVIF_TRUE)) {
                     return AVIF_RESULT_INVALID_IMAGE_GRID;
                 }
                 data->alphaTileCount = data->tiles.count - data->colorTileCount;
@@ -3028,40 +3436,26 @@ avifResult avifDecoderReset(avifDecoder * decoder)
                     return AVIF_RESULT_NO_AV1_ITEMS_FOUND;
                 }
 
-                avifTile * alphaTile = avifDecoderDataCreateTile(data);
-                avifDecodeSample * alphaSample = (avifDecodeSample *)avifArrayPushPtr(&alphaTile->input->samples);
-                alphaSample->itemID = alphaItem->id;
-                alphaSample->offset = 0;
-                alphaSample->size = alphaItem->size;
-                alphaSample->sync = AVIF_TRUE;
+                avifTile * alphaTile =
+                    avifDecoderDataCreateTile(data, alphaItem->width, alphaItem->height, avifDecoderItemOperatingPoint(alphaItem));
+                if (!avifCodecDecodeInputFillFromDecoderItem(alphaTile->input,
+                                                             alphaItem,
+                                                             decoder->allowProgressive,
+                                                             decoder->imageCountLimit,
+                                                             decoder->io->sizeHint,
+                                                             &decoder->diag)) {
+                    return AVIF_RESULT_BMFF_PARSE_FAILED;
+                }
                 alphaTile->input->alpha = AVIF_TRUE;
                 data->alphaTileCount = 1;
             }
         }
 
-        // Set all counts and timing to safe-but-uninteresting values
-        decoder->imageIndex = -1;
-        decoder->imageCount = 1;
-        decoder->imageTiming.timescale = 1;
-        decoder->imageTiming.pts = 0;
-        decoder->imageTiming.ptsInTimescales = 0;
-        decoder->imageTiming.duration = 1;
-        decoder->imageTiming.durationInTimescales = 1;
-        decoder->timescale = 1;
-        decoder->duration = 1;
-        decoder->durationInTimescales = 1;
-
         decoder->ioStats.colorOBUSize = colorItem->size;
         decoder->ioStats.alphaOBUSize = alphaItem ? alphaItem->size : 0;
 
-        const avifProperty * ispeProp = avifPropertyArrayFind(colorProperties, "ispe");
-        if (ispeProp) {
-            decoder->image->width = ispeProp->u.ispe.width;
-            decoder->image->height = ispeProp->u.ispe.height;
-        } else {
-            decoder->image->width = 0;
-            decoder->image->height = 0;
-        }
+        decoder->image->width = colorItem->width;
+        decoder->image->height = colorItem->height;
         decoder->alphaPresent = (alphaItem != NULL);
         decoder->image->alphaPremultiplied = decoder->alphaPresent && (colorItem->premByID == alphaItem->id);
 
@@ -3235,8 +3629,17 @@ avifResult avifDecoderNextImage(avifDecoder * decoder)
         const avifDecodeSample * sample = &tile->input->samples.sample[nextImageIndex];
 
         if (!tile->codec->getNextImage(tile->codec, decoder, sample, tile->input->alpha, tile->image)) {
+            avifDiagnosticsPrintf(&decoder->diag, "tile->codec->getNextImage() failed");
             return tile->input->alpha ? AVIF_RESULT_DECODE_ALPHA_FAILED : AVIF_RESULT_DECODE_COLOR_FAILED;
         }
+
+        // Scale the decoded image so that it corresponds to this tile's output dimensions
+        if ((tile->width != tile->image->width) || (tile->height != tile->image->height)) {
+            if (!avifImageScale(tile->image, tile->width, tile->height, decoder->imageSizeLimit, &decoder->diag)) {
+                avifDiagnosticsPrintf(&decoder->diag, "avifImageScale() failed");
+                return tile->input->alpha ? AVIF_RESULT_DECODE_ALPHA_FAILED : AVIF_RESULT_DECODE_COLOR_FAILED;
+            }
+        }
     }
 
     if (decoder->data->tiles.count != (decoder->data->colorTileCount + decoder->data->alphaTileCount)) {
@@ -3252,6 +3655,7 @@ avifResult avifDecoderNextImage(avifDecoder * decoder)
         // Normal (most common) non-grid path. Just steal the planes from the only "tile".
 
         if (decoder->data->colorTileCount != 1) {
+            avifDiagnosticsPrintf(&decoder->diag, "decoder->data->colorTileCount should be 1 but is %u", decoder->data->colorTileCount);
             return AVIF_RESULT_DECODE_COLOR_FAILED;
         }
 
@@ -3295,12 +3699,14 @@ avifResult avifDecoderNextImage(avifDecoder * decoder)
             avifImageFreePlanes(decoder->image, AVIF_PLANES_A); // no alpha
         } else {
             if (decoder->data->alphaTileCount != 1) {
+                avifDiagnosticsPrintf(&decoder->diag, "decoder->data->alphaTileCount should be 1 but is %u", decoder->data->alphaTileCount);
                 return AVIF_RESULT_DECODE_ALPHA_FAILED;
             }
 
             avifImage * srcAlpha = decoder->data->tiles.tile[decoder->data->colorTileCount].image;
             if ((decoder->image->width != srcAlpha->width) || (decoder->image->height != srcAlpha->height) ||
                 (decoder->image->depth != srcAlpha->depth)) {
+                avifDiagnosticsPrintf(&decoder->diag, "decoder->image does not match srcAlpha in width, height, or bit depth");
                 return AVIF_RESULT_DECODE_ALPHA_FAILED;
             }
 
diff --git a/src/reformat.c b/src/reformat.c
index 8d1dd1b..ea19b98 100644
--- a/src/reformat.c
+++ b/src/reformat.c
@@ -442,6 +442,11 @@ static avifResult avifImageYUVAnyToRGBAnySlow(const avifImage * image,
     // These are the only supported built-ins
     assert((chromaUpsampling == AVIF_CHROMA_UPSAMPLING_BILINEAR) || (chromaUpsampling == AVIF_CHROMA_UPSAMPLING_NEAREST));
 
+    // If toRGBAlphaMode is active (not no-op), assert that the alpha plane is present. The end of
+    // the avifPrepareReformatState() function should ensure this, but this assert makes it clear
+    // to clang's analyzer.
+    assert((state->toRGBAlphaMode == AVIF_ALPHA_MULTIPLY_MODE_NO_OP) || aPlane);
+
     for (uint32_t j = 0; j < image->height; ++j) {
         const uint32_t uvJ = j >> state->formatInfo.chromaShiftY;
         const uint8_t * ptrY8 = &yPlane[j * yRowBytes];
diff --git a/src/scale.c b/src/scale.c
new file mode 100644
index 0000000..6ca651b
--- /dev/null
+++ b/src/scale.c
@@ -0,0 +1,150 @@
+// Copyright 2021 Joe Drago. All rights reserved.
+// SPDX-License-Identifier: BSD-2-Clause
+
+#include "avif/internal.h"
+
+#if !defined(AVIF_LIBYUV_ENABLED)
+
+avifBool avifImageScale(avifImage * image, uint32_t dstWidth, uint32_t dstHeight, uint32_t imageSizeLimit, avifDiagnostics * diag)
+{
+    (void)image;
+    (void)dstWidth;
+    (void)dstHeight;
+    (void)imageSizeLimit;
+    avifDiagnosticsPrintf(diag, "avifImageScale() called, but is unimplemented without libyuv!");
+    return AVIF_FALSE;
+}
+
+#else
+
+#include <limits.h>
+
+#if defined(__clang__)
+#pragma clang diagnostic push
+#pragma clang diagnostic ignored "-Wstrict-prototypes" // "this function declaration is not a prototype"
+#endif
+#include <libyuv.h>
+#if defined(__clang__)
+#pragma clang diagnostic pop
+#endif
+
+// This should be configurable and/or smarter. kFilterBox has the highest quality but is the slowest.
+#define AVIF_LIBYUV_FILTER_MODE kFilterBox
+
+avifBool avifImageScale(avifImage * image, uint32_t dstWidth, uint32_t dstHeight, uint32_t imageSizeLimit, avifDiagnostics * diag)
+{
+    if ((image->width == dstWidth) && (image->height == dstHeight)) {
+        // Nothing to do
+        return AVIF_TRUE;
+    }
+
+    if ((dstWidth == 0) || (dstHeight == 0)) {
+        avifDiagnosticsPrintf(diag, "avifImageScale requested invalid dst dimensions [%ux%u]", dstWidth, dstHeight);
+        return AVIF_FALSE;
+    }
+    if (dstWidth > (imageSizeLimit / dstHeight)) {
+        avifDiagnosticsPrintf(diag, "avifImageScale requested dst dimensions that are too large [%ux%u]", dstWidth, dstHeight);
+        return AVIF_FALSE;
+    }
+
+    uint8_t * srcYUVPlanes[AVIF_PLANE_COUNT_YUV];
+    uint32_t srcYUVRowBytes[AVIF_PLANE_COUNT_YUV];
+    for (int i = 0; i < AVIF_PLANE_COUNT_YUV; ++i) {
+        srcYUVPlanes[i] = image->yuvPlanes[i];
+        image->yuvPlanes[i] = NULL;
+        srcYUVRowBytes[i] = image->yuvRowBytes[i];
+        image->yuvRowBytes[i] = 0;
+    }
+    const avifBool srcImageOwnsYUVPlanes = image->imageOwnsYUVPlanes;
+    image->imageOwnsYUVPlanes = AVIF_FALSE;
+
+    uint8_t * srcAlphaPlane = image->alphaPlane;
+    image->alphaPlane = NULL;
+    uint32_t srcAlphaRowBytes = image->alphaRowBytes;
+    image->alphaRowBytes = 0;
+    const avifBool srcImageOwnsAlphaPlane = image->imageOwnsAlphaPlane;
+    image->imageOwnsAlphaPlane = AVIF_FALSE;
+
+    const uint32_t srcWidth = image->width;
+    image->width = dstWidth;
+    const uint32_t srcHeight = image->height;
+    image->height = dstHeight;
+
+    if (srcYUVPlanes[0] || srcAlphaPlane) {
+        // A simple conservative check to avoid integer overflows in libyuv's ScalePlane() and
+        // ScalePlane_12() functions.
+        if (srcWidth > 16384) {
+            avifDiagnosticsPrintf(diag, "avifImageScale requested invalid width scale for libyuv [%u -> %u]", srcWidth, dstWidth);
+            return AVIF_FALSE;
+        }
+        if (srcHeight > 16384) {
+            avifDiagnosticsPrintf(diag, "avifImageScale requested invalid height scale for libyuv [%u -> %u]", srcHeight, dstHeight);
+            return AVIF_FALSE;
+        }
+    }
+
+    if (srcYUVPlanes[0]) {
+        avifImageAllocatePlanes(image, AVIF_PLANES_YUV);
+
+        avifPixelFormatInfo formatInfo;
+        avifGetPixelFormatInfo(image->yuvFormat, &formatInfo);
+        const uint32_t srcUVWidth = (srcWidth + formatInfo.chromaShiftX) >> formatInfo.chromaShiftX;
+        const uint32_t srcUVHeight = (srcHeight + formatInfo.chromaShiftY) >> formatInfo.chromaShiftY;
+        const uint32_t dstUVWidth = (dstWidth + formatInfo.chromaShiftX) >> formatInfo.chromaShiftX;
+        const uint32_t dstUVHeight = (dstHeight + formatInfo.chromaShiftY) >> formatInfo.chromaShiftY;
+
+        for (int i = 0; i < AVIF_PLANE_COUNT_YUV; ++i) {
+            if (!srcYUVPlanes[i]) {
+                continue;
+            }
+
+            const uint32_t srcW = (i == AVIF_CHAN_Y) ? srcWidth : srcUVWidth;
+            const uint32_t srcH = (i == AVIF_CHAN_Y) ? srcHeight : srcUVHeight;
+            const uint32_t dstW = (i == AVIF_CHAN_Y) ? dstWidth : dstUVWidth;
+            const uint32_t dstH = (i == AVIF_CHAN_Y) ? dstHeight : dstUVHeight;
+            if (image->depth > 8) {
+                uint16_t * const srcPlane = (uint16_t *)srcYUVPlanes[i];
+                const uint32_t srcStride = srcYUVRowBytes[i] / 2;
+                uint16_t * const dstPlane = (uint16_t *)image->yuvPlanes[i];
+                const uint32_t dstStride = image->yuvRowBytes[i] / 2;
+                ScalePlane_12(srcPlane, srcStride, srcW, srcH, dstPlane, dstStride, dstW, dstH, AVIF_LIBYUV_FILTER_MODE);
+            } else {
+                uint8_t * const srcPlane = srcYUVPlanes[i];
+                const uint32_t srcStride = srcYUVRowBytes[i];
+                uint8_t * const dstPlane = image->yuvPlanes[i];
+                const uint32_t dstStride = image->yuvRowBytes[i];
+                ScalePlane(srcPlane, srcStride, srcW, srcH, dstPlane, dstStride, dstW, dstH, AVIF_LIBYUV_FILTER_MODE);
+            }
+
+            if (srcImageOwnsYUVPlanes) {
+                avifFree(srcYUVPlanes[i]);
+            }
+        }
+    }
+
+    if (srcAlphaPlane) {
+        avifImageAllocatePlanes(image, AVIF_PLANES_A);
+
+        if (image->depth > 8) {
+            uint16_t * const srcPlane = (uint16_t *)srcAlphaPlane;
+            const uint32_t srcStride = srcAlphaRowBytes / 2;
+            uint16_t * const dstPlane = (uint16_t *)image->alphaPlane;
+            const uint32_t dstStride = image->alphaRowBytes / 2;
+            ScalePlane_12(srcPlane, srcStride, srcWidth, srcHeight, dstPlane, dstStride, dstWidth, dstHeight, AVIF_LIBYUV_FILTER_MODE);
+        } else {
+            uint8_t * const srcPlane = srcAlphaPlane;
+            const uint32_t srcStride = srcAlphaRowBytes;
+            uint8_t * const dstPlane = image->alphaPlane;
+            const uint32_t dstStride = image->alphaRowBytes;
+            ScalePlane(srcPlane, srcStride, srcWidth, srcHeight, dstPlane, dstStride, dstWidth, dstHeight, AVIF_LIBYUV_FILTER_MODE);
+        }
+
+        if (srcImageOwnsAlphaPlane) {
+            avifFree(srcAlphaPlane);
+        }
+    }
+
+    return AVIF_TRUE;
+}
+
+#endif
diff --git a/src/write.c b/src/write.c
index 6a2474a..eff6d2c 100644
--- a/src/write.c
+++ b/src/write.c
@@ -180,6 +180,79 @@ static void avifEncoderItemAddMdatFixup(avifEncoderItem * item, const avifRWStre
     fixup->offset = avifRWStreamOffset(s);
 }
 
+// ---------------------------------------------------------------------------
+// avifItemPropertyDedup - Provides ipco deduplication
+
+typedef struct avifItemProperty
+{
+    uint8_t index;
+    size_t offset;
+    size_t size;
+} avifItemProperty;
+AVIF_ARRAY_DECLARE(avifItemPropertyArray, avifItemProperty, property);
+
+typedef struct avifItemPropertyDedup
+{
+    avifItemPropertyArray properties;
+    avifRWStream s;    // Temporary stream for each new property, checked against already-written boxes for deduplication
+    avifRWData buffer; // Temporary storage for 's'
+    uint8_t nextIndex; // 1-indexed, incremented every time another unique property is finished
+} avifItemPropertyDedup;
+
+static avifItemPropertyDedup * avifItemPropertyDedupCreate(void)
+{
+    avifItemPropertyDedup * dedup = (avifItemPropertyDedup *)avifAlloc(sizeof(avifItemPropertyDedup));
+    memset(dedup, 0, sizeof(avifItemPropertyDedup));
+    avifArrayCreate(&dedup->properties, sizeof(avifItemProperty), 8);
+    avifRWDataRealloc(&dedup->buffer, 2048); // This will resize automatically (if necessary)
+    return dedup;
+}
+
+static void avifItemPropertyDedupDestroy(avifItemPropertyDedup * dedup)
+{
+    avifArrayDestroy(&dedup->properties);
+    avifRWDataFree(&dedup->buffer);
+    avifFree(dedup);
+}
+
+// Resets the dedup's temporary write stream in preparation for a single item property's worth of writing
+static void avifItemPropertyDedupStart(avifItemPropertyDedup * dedup)
+{
+    avifRWStreamStart(&dedup->s, &dedup->buffer);
+}
+
+// This compares the newly written item property (in the dedup's temporary storage buffer) to
+// already-written properties (whose offsets/sizes in outputStream are recorded in the dedup). If a
+// match is found, the previous item's index is used. If this new property is unique, it is
+// assigned the next available property index, written to the output stream, and its offset/size in
+// the output stream is recorded in the dedup for future comparisons.
+//
+// This function always returns a valid 1-indexed property index for usage in a property association
+// (ipma) box later. If the most recent property was a duplicate of a previous property, the return
+// value will be the index of the original property, otherwise it will be the index of the newly
+// created property.
+static uint8_t avifItemPropertyDedupFinish(avifItemPropertyDedup * dedup, avifRWStream * outputStream)
+{
+    const size_t newPropertySize = avifRWStreamOffset(&dedup->s);
+
+    for (size_t i = 0; i < dedup->properties.count; ++i) {
+        avifItemProperty * property = &dedup->properties.property[i];
+        if ((property->size == newPropertySize) &&
+            !memcmp(&outputStream->raw->data[property->offset], dedup->buffer.data, newPropertySize)) {
+            // We've already written this exact property, reuse it
+            return property->index;
+        }
+    }
+
+    // Write a new property, and remember its location in the output stream for future deduplication
+    avifItemProperty * property = (avifItemProperty *)avifArrayPushPtr(&dedup->properties);
+    property->index = ++dedup->nextIndex; // preincrement so the first new index is 1 (as ipma is 1-indexed)
+    property->size = newPropertySize;
+    property->offset = avifRWStreamOffset(outputStream);
+    avifRWStreamWrite(outputStream, dedup->buffer.data, newPropertySize);
+    return property->index;
+}
+
 // ---------------------------------------------------------------------------
 
 avifEncoder * avifEncoderCreate(void)
@@ -213,20 +286,48 @@ void avifEncoderSetCodecSpecificOption(avifEncoder * encoder, const char * key,
     avifCodecSpecificOptionsSet(encoder->csOptions, key, value);
 }
 
-static void avifEncoderWriteColorProperties(avifRWStream * s, const avifImage * imageMetadata, struct ipmaArray * ipma, uint8_t * itemPropertyIndex)
+// This function is used in two codepaths:
+// * writing color *item* properties
+// * writing color *track* properties
+//
+// Item properties must have property associations with them and can be deduplicated (by reusing
+// these associations), so this function leverages the ipma and dedup arguments to do this.
+//
+// Track properties, however, are implicitly associated by the track in which they are contained, so
+// there is no need to build a property association box (ipma), and no way to deduplicate/reuse a
+// property. In this case, the ipma and dedup arguments should/will be set to NULL, and this
+// function will avoid using them.
+static void avifEncoderWriteColorProperties(avifRWStream * outputStream,
+                                            const avifImage * imageMetadata,
+                                            struct ipmaArray * ipma,
+                                            avifItemPropertyDedup * dedup)
 {
+    avifRWStream * s = outputStream;
+    if (dedup) {
+        assert(ipma);
+
+        // Use the dedup's temporary stream for box writes
+        s = &dedup->s;
+    }
+
     if (imageMetadata->icc.size > 0) {
+        if (dedup) {
+            avifItemPropertyDedupStart(dedup);
+        }
         avifBoxMarker colr = avifRWStreamWriteBox(s, "colr", AVIF_BOX_SIZE_TBD);
         avifRWStreamWriteChars(s, "prof", 4); // unsigned int(32) colour_type;
         avifRWStreamWrite(s, imageMetadata->icc.data, imageMetadata->icc.size);
         avifRWStreamFinishBox(s, colr);
-        if (ipma && itemPropertyIndex) {
-            ipmaPush(ipma, ++(*itemPropertyIndex), AVIF_FALSE);
+        if (dedup) {
+            ipmaPush(ipma, avifItemPropertyDedupFinish(dedup, outputStream), AVIF_FALSE);
         }
     }
 
     // HEIF 6.5.5.1, from Amendment 3 allows multiple colr boxes: "at most one for a given value of colour type"
     // Therefore, *always* writing an nclx box, even if a prof box was already written above.
+    if (dedup) {
+        avifItemPropertyDedupStart(dedup);
+    }
     avifBoxMarker colr = avifRWStreamWriteBox(s, "colr", AVIF_BOX_SIZE_TBD);
     avifRWStreamWriteChars(s, "nclx", 4);                                            // unsigned int(32) colour_type;
     avifRWStreamWriteU16(s, imageMetadata->colorPrimaries);                          // unsigned int(16) colour_primaries;
@@ -235,21 +336,27 @@ static void avifEncoderWriteColorProperties(avifRWStream * s, const avifImage *
     avifRWStreamWriteU8(s, (imageMetadata->yuvRange == AVIF_RANGE_FULL) ? 0x80 : 0); // unsigned int(1) full_range_flag;
                                                                                      // unsigned int(7) reserved = 0;
     avifRWStreamFinishBox(s, colr);
-    if (ipma && itemPropertyIndex) {
-        ipmaPush(ipma, ++(*itemPropertyIndex), AVIF_FALSE);
+    if (dedup) {
+        ipmaPush(ipma, avifItemPropertyDedupFinish(dedup, outputStream), AVIF_FALSE);
     }
 
     // Write (Optional) Transformations
     if (imageMetadata->transformFlags & AVIF_TRANSFORM_PASP) {
+        if (dedup) {
+            avifItemPropertyDedupStart(dedup);
+        }
         avifBoxMarker pasp = avifRWStreamWriteBox(s, "pasp", AVIF_BOX_SIZE_TBD);
         avifRWStreamWriteU32(s, imageMetadata->pasp.hSpacing); // unsigned int(32) hSpacing;
         avifRWStreamWriteU32(s, imageMetadata->pasp.vSpacing); // unsigned int(32) vSpacing;
         avifRWStreamFinishBox(s, pasp);
-        if (ipma && itemPropertyIndex) {
-            ipmaPush(ipma, ++(*itemPropertyIndex), AVIF_FALSE);
+        if (dedup) {
+            ipmaPush(ipma, avifItemPropertyDedupFinish(dedup, outputStream), AVIF_FALSE);
         }
     }
     if (imageMetadata->transformFlags & AVIF_TRANSFORM_CLAP) {
+        if (dedup) {
+            avifItemPropertyDedupStart(dedup);
+        }
         avifBoxMarker clap = avifRWStreamWriteBox(s, "clap", AVIF_BOX_SIZE_TBD);
         avifRWStreamWriteU32(s, imageMetadata->clap.widthN);    // unsigned int(32) cleanApertureWidthN;
         avifRWStreamWriteU32(s, imageMetadata->clap.widthD);    // unsigned int(32) cleanApertureWidthD;
@@ -260,26 +367,32 @@ static void avifEncoderWriteColorProperties(avifRWStream * s, const avifImage *
         avifRWStreamWriteU32(s, imageMetadata->clap.vertOffN);  // unsigned int(32) vertOffN;
         avifRWStreamWriteU32(s, imageMetadata->clap.vertOffD);  // unsigned int(32) vertOffD;
         avifRWStreamFinishBox(s, clap);
-        if (ipma && itemPropertyIndex) {
-            ipmaPush(ipma, ++(*itemPropertyIndex), AVIF_TRUE);
+        if (dedup) {
+            ipmaPush(ipma, avifItemPropertyDedupFinish(dedup, outputStream), AVIF_TRUE);
         }
     }
     if (imageMetadata->transformFlags & AVIF_TRANSFORM_IROT) {
+        if (dedup) {
+            avifItemPropertyDedupStart(dedup);
+        }
         avifBoxMarker irot = avifRWStreamWriteBox(s, "irot", AVIF_BOX_SIZE_TBD);
         uint8_t angle = imageMetadata->irot.angle & 0x3;
         avifRWStreamWrite(s, &angle, 1); // unsigned int (6) reserved = 0; unsigned int (2) angle;
         avifRWStreamFinishBox(s, irot);
-        if (ipma && itemPropertyIndex) {
-            ipmaPush(ipma, ++(*itemPropertyIndex), AVIF_TRUE);
+        if (dedup) {
+            ipmaPush(ipma, avifItemPropertyDedupFinish(dedup, outputStream), AVIF_TRUE);
         }
     }
     if (imageMetadata->transformFlags & AVIF_TRANSFORM_IMIR) {
+        if (dedup) {
+            avifItemPropertyDedupStart(dedup);
+        }
         avifBoxMarker imir = avifRWStreamWriteBox(s, "imir", AVIF_BOX_SIZE_TBD);
         uint8_t mode = imageMetadata->imir.mode & 0x1;
         avifRWStreamWrite(s, &mode, 1); // unsigned int (7) reserved = 0; unsigned int (1) mode;
         avifRWStreamFinishBox(s, imir);
-        if (ipma && itemPropertyIndex) {
-            ipmaPush(ipma, ++(*itemPropertyIndex), AVIF_TRUE);
+        if (dedup) {
+            ipmaPush(ipma, avifItemPropertyDedupFinish(dedup, outputStream), AVIF_TRUE);
         }
     }
 }
@@ -900,7 +1013,7 @@ avifResult avifEncoderFinish(avifEncoder * encoder, avifRWData * output)
 
     avifBoxMarker iprp = avifRWStreamWriteBox(&s, "iprp", AVIF_BOX_SIZE_TBD);
 
-    uint8_t itemPropertyIndex = 0;
+    avifItemPropertyDedup * dedup = avifItemPropertyDedupCreate();
     avifBoxMarker ipco = avifRWStreamWriteBox(&s, "ipco", AVIF_BOX_SIZE_TBD);
     for (uint32_t itemIndex = 0; itemIndex < encoder->data->items.count; ++itemIndex) {
         avifEncoderItem * item = &encoder->data->items.item[itemIndex];
@@ -940,40 +1053,46 @@ avifResult avifEncoderFinish(avifEncoder * encoder, avifRWData * output)
 
         // Properties all av01 items need
 
-        avifBoxMarker ispe = avifRWStreamWriteFullBox(&s, "ispe", AVIF_BOX_SIZE_TBD, 0, 0);
-        avifRWStreamWriteU32(&s, imageWidth);  // unsigned int(32) image_width;
-        avifRWStreamWriteU32(&s, imageHeight); // unsigned int(32) image_height;
-        avifRWStreamFinishBox(&s, ispe);
-        ipmaPush(&item->ipma, ++itemPropertyIndex, AVIF_FALSE); // ipma is 1-indexed, doing this afterwards is correct
+        avifItemPropertyDedupStart(dedup);
+        avifBoxMarker ispe = avifRWStreamWriteFullBox(&dedup->s, "ispe", AVIF_BOX_SIZE_TBD, 0, 0);
+        avifRWStreamWriteU32(&dedup->s, imageWidth);  // unsigned int(32) image_width;
+        avifRWStreamWriteU32(&dedup->s, imageHeight); // unsigned int(32) image_height;
+        avifRWStreamFinishBox(&dedup->s, ispe);
+        ipmaPush(&item->ipma, avifItemPropertyDedupFinish(dedup, &s), AVIF_FALSE);
 
+        avifItemPropertyDedupStart(dedup);
         uint8_t channelCount = (item->alpha || (imageMetadata->yuvFormat == AVIF_PIXEL_FORMAT_YUV400)) ? 1 : 3;
-        avifBoxMarker pixi = avifRWStreamWriteFullBox(&s, "pixi", AVIF_BOX_SIZE_TBD, 0, 0);
-        avifRWStreamWriteU8(&s, channelCount); // unsigned int (8) num_channels;
+        avifBoxMarker pixi = avifRWStreamWriteFullBox(&dedup->s, "pixi", AVIF_BOX_SIZE_TBD, 0, 0);
+        avifRWStreamWriteU8(&dedup->s, channelCount); // unsigned int (8) num_channels;
         for (uint8_t chan = 0; chan < channelCount; ++chan) {
-            avifRWStreamWriteU8(&s, (uint8_t)imageMetadata->depth); // unsigned int (8) bits_per_channel;
+            avifRWStreamWriteU8(&dedup->s, (uint8_t)imageMetadata->depth); // unsigned int (8) bits_per_channel;
         }
-        avifRWStreamFinishBox(&s, pixi);
-        ipmaPush(&item->ipma, ++itemPropertyIndex, AVIF_FALSE);
+        avifRWStreamFinishBox(&dedup->s, pixi);
+        ipmaPush(&item->ipma, avifItemPropertyDedupFinish(dedup, &s), AVIF_FALSE);
 
         if (item->codec) {
-            writeConfigBox(&s, &item->av1C);
-            ipmaPush(&item->ipma, ++itemPropertyIndex, AVIF_TRUE);
+            avifItemPropertyDedupStart(dedup);
+            writeConfigBox(&dedup->s, &item->av1C);
+            ipmaPush(&item->ipma, avifItemPropertyDedupFinish(dedup, &s), AVIF_TRUE);
         }
 
         if (item->alpha) {
             // Alpha specific properties
 
-            avifBoxMarker auxC = avifRWStreamWriteFullBox(&s, "auxC", AVIF_BOX_SIZE_TBD, 0, 0);
-            avifRWStreamWriteChars(&s, alphaURN, alphaURNSize); //  string aux_type;
-            avifRWStreamFinishBox(&s, auxC);
-            ipmaPush(&item->ipma, ++itemPropertyIndex, AVIF_FALSE);
+            avifItemPropertyDedupStart(dedup);
+            avifBoxMarker auxC = avifRWStreamWriteFullBox(&dedup->s, "auxC", AVIF_BOX_SIZE_TBD, 0, 0);
+            avifRWStreamWriteChars(&dedup->s, alphaURN, alphaURNSize); //  string aux_type;
+            avifRWStreamFinishBox(&dedup->s, auxC);
+            ipmaPush(&item->ipma, avifItemPropertyDedupFinish(dedup, &s), AVIF_FALSE);
         } else {
             // Color specific properties
 
-            avifEncoderWriteColorProperties(&s, imageMetadata, &item->ipma, &itemPropertyIndex);
+            avifEncoderWriteColorProperties(&s, imageMetadata, &item->ipma, dedup);
         }
     }
     avifRWStreamFinishBox(&s, ipco);
+    avifItemPropertyDedupDestroy(dedup);
+    dedup = NULL;
 
     avifBoxMarker ipma = avifRWStreamWriteFullBox(&s, "ipma", AVIF_BOX_SIZE_TBD, 0, 0);
     {
diff --git a/tests/data/tests.json b/tests/data/tests.json
index a34b80d..83a090d 100644
--- a/tests/data/tests.json
+++ b/tests/data/tests.json
@@ -1,1298 +1,1298 @@
 [
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp0_0_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp0_0_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp4_40_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp4_40_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp24_60_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp24_60_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp0_0_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp0_0_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp4_40_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp4_40_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp24_60_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp24_60_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp0_0_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp0_0_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp4_40_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp4_40_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp24_60_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp24_60_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp0_0_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp0_0_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp4_40_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp4_40_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp24_60_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp24_60_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp0_0_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp0_0_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp4_40_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp4_40_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp24_60_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp24_60_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp0_0_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp0_0_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp4_40_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp4_40_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp24_60_speed-1",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp24_60_speed10",
-    "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.37820690870285034
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_aom_qp0_0_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_aom_qp0_0_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_aom_qp4_40_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_aom_qp4_40_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_aom_qp24_60_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_aom_qp24_60_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp0_0_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp0_0_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp4_40_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp4_40_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp24_60_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp24_60_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp0_0_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp0_0_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp4_40_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp4_40_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp24_60_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp24_60_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp0_0_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp0_0_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp4_40_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp4_40_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp24_60_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp24_60_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp0_0_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp0_0_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp4_40_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp4_40_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp24_60_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp24_60_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp0_0_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp0_0_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp4_40_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp4_40_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp24_60_speed-1",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp24_60_speed10",
-    "input": "kodim03_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 3,
-    "avg": 0.27903494238853455
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_aom_qp0_0_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_aom_qp0_0_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_aom_qp4_40_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_aom_qp4_40_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_aom_qp24_60_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_aom_qp24_60_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp0_0_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp0_0_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp4_40_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp4_40_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp24_60_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp24_60_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp0_0_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp0_0_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp4_40_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp4_40_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp24_60_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp24_60_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "aom",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 0,
-    "avg": 0.25
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp0_0_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp0_0_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp4_40_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp4_40_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp24_60_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp24_60_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "aom",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp0_0_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp0_0_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp4_40_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp4_40_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp24_60_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp24_60_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "dav1d",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp0_0_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": -1,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp0_0_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 0,
-    "maxQP": 0,
-    "speed": 10,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp4_40_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": -1,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp4_40_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 4,
-    "maxQP": 40,
-    "speed": 10,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp24_60_speed-1",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": -1,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  },
-  {
-    "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp24_60_speed10",
-    "input": "kodim23_yuv420_8bpc.y4m",
-    "enc": "rav1e",
-    "dec": "libgav1",
-    "minQP": 24,
-    "maxQP": 60,
-    "speed": 10,
-    "active": true,
-    "max": 2,
-    "avg": 0.27959379553794861
-  }
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp0_0_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp0_0_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp4_40_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 79,
+        "avg": 3.7078702449798584
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp4_40_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 77,
+        "avg": 2.4528341293334961
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp24_60_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 79,
+        "avg": 3.7078702449798584
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_aom_qp24_60_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 79,
+        "avg": 3.5470981597900391
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp0_0_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp0_0_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp4_40_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 79,
+        "avg": 3.7078702449798584
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp4_40_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 77,
+        "avg": 2.4528341293334961
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp24_60_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 79,
+        "avg": 3.7078702449798584
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_dav1d_qp24_60_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 79,
+        "avg": 3.5470981597900391
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp0_0_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp0_0_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp4_40_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 79,
+        "avg": 3.7078702449798584
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp4_40_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 77,
+        "avg": 2.4528341293334961
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp24_60_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 79,
+        "avg": 3.7078702449798584
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_aom_to_libgav1_qp24_60_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 79,
+        "avg": 3.5470981597900391
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp0_0_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 3,
+        "avg": 0.37825256586074829
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp0_0_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 4,
+        "avg": 0.45401030778884888
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp4_40_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 157,
+        "avg": 5.5284576416015625
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp4_40_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 141,
+        "avg": 5.6126852035522461
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp24_60_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 420,
+        "avg": 10.41141414642334
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_aom_qp24_60_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 306,
+        "avg": 10.473468780517578
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp0_0_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 3,
+        "avg": 0.37825256586074829
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp0_0_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 4,
+        "avg": 0.45401030778884888
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp4_40_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 157,
+        "avg": 5.5284576416015625
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp4_40_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 141,
+        "avg": 5.6126852035522461
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp24_60_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 420,
+        "avg": 10.41141414642334
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_dav1d_qp24_60_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 306,
+        "avg": 10.473468780517578
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp0_0_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 3,
+        "avg": 0.37825256586074829
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp0_0_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 4,
+        "avg": 0.45401030778884888
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp4_40_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 157,
+        "avg": 5.5284576416015625
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp4_40_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 141,
+        "avg": 5.6126852035522461
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp24_60_speed-1",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 420,
+        "avg": 10.41141414642334
+    },
+    {
+        "name": "cosmos1650_yuv444_10bpc_p3pq_rav1e_to_libgav1_qp24_60_speed10",
+        "input": "cosmos1650_yuv444_10bpc_p3pq.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 306,
+        "avg": 10.473468780517578
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_aom_qp0_0_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_aom_qp0_0_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_aom_qp4_40_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.74706143140792847
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_aom_qp4_40_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 38,
+        "avg": 0.57263118028640747
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_aom_qp24_60_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.74706143140792847
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_aom_qp24_60_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 49,
+        "avg": 0.75880557298660278
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp0_0_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp0_0_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp4_40_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.74706143140792847
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp4_40_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 38,
+        "avg": 0.57263118028640747
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp24_60_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.74706143140792847
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_dav1d_qp24_60_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 49,
+        "avg": 0.75880557298660278
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp0_0_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp0_0_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp4_40_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.74706143140792847
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp4_40_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 38,
+        "avg": 0.57263118028640747
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp24_60_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.74706143140792847
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_aom_to_libgav1_qp24_60_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 49,
+        "avg": 0.75880557298660278
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp0_0_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 3,
+        "avg": 0.27893766760826111
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp0_0_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 3,
+        "avg": 0.316540390253067
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp4_40_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 49,
+        "avg": 1.02064323425293
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp4_40_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 62,
+        "avg": 1.0844612121582031
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp24_60_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 111,
+        "avg": 1.82947039604187
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_aom_qp24_60_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 119,
+        "avg": 1.9037811756134033
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp0_0_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 3,
+        "avg": 0.27893766760826111
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp0_0_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 3,
+        "avg": 0.316540390253067
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp4_40_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 49,
+        "avg": 1.02064323425293
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp4_40_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 62,
+        "avg": 1.0844612121582031
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp24_60_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 111,
+        "avg": 1.82947039604187
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_dav1d_qp24_60_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 119,
+        "avg": 1.9037811756134033
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp0_0_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 3,
+        "avg": 0.27893766760826111
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp0_0_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 3,
+        "avg": 0.316540390253067
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp4_40_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 49,
+        "avg": 1.02064323425293
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp4_40_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 62,
+        "avg": 1.0844612121582031
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp24_60_speed-1",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 111,
+        "avg": 1.82947039604187
+    },
+    {
+        "name": "kodim03_yuv420_8bpc_rav1e_to_libgav1_qp24_60_speed10",
+        "input": "kodim03_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 119,
+        "avg": 1.9037811756134033
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_aom_qp0_0_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_aom_qp0_0_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_aom_qp4_40_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.78176945447921753
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_aom_qp4_40_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 9,
+        "avg": 0.61920797824859619
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_aom_qp24_60_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.78176945447921753
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_aom_qp24_60_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 37,
+        "avg": 0.793113112449646
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp0_0_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp0_0_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp4_40_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.78176945447921753
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp4_40_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 9,
+        "avg": 0.61920797824859619
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp24_60_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.78176945447921753
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_dav1d_qp24_60_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 37,
+        "avg": 0.793113112449646
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp0_0_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp0_0_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 0,
+        "avg": 0.25
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp4_40_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.78176945447921753
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp4_40_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 9,
+        "avg": 0.61920797824859619
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp24_60_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 18,
+        "avg": 0.78176945447921753
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_aom_to_libgav1_qp24_60_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "aom",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 37,
+        "avg": 0.793113112449646
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp0_0_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 2,
+        "avg": 0.279501587152481
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp0_0_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 3,
+        "avg": 0.31784248352050781
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp4_40_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 42,
+        "avg": 0.99207621812820435
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp4_40_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 89,
+        "avg": 1.0305366516113281
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp24_60_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 113,
+        "avg": 1.7271842956542969
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_aom_qp24_60_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "aom",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 113,
+        "avg": 1.7759075164794922
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp0_0_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 2,
+        "avg": 0.279501587152481
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp0_0_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 3,
+        "avg": 0.31784248352050781
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp4_40_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 42,
+        "avg": 0.99207621812820435
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp4_40_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 89,
+        "avg": 1.0305366516113281
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp24_60_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 113,
+        "avg": 1.7271842956542969
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_dav1d_qp24_60_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "dav1d",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 113,
+        "avg": 1.7759075164794922
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp0_0_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": -1,
+        "active": true,
+        "max": 2,
+        "avg": 0.279501587152481
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp0_0_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 0,
+        "maxQP": 0,
+        "speed": 10,
+        "active": true,
+        "max": 3,
+        "avg": 0.31784248352050781
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp4_40_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": -1,
+        "active": true,
+        "max": 42,
+        "avg": 0.99207621812820435
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp4_40_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 4,
+        "maxQP": 40,
+        "speed": 10,
+        "active": true,
+        "max": 89,
+        "avg": 1.0305366516113281
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp24_60_speed-1",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": -1,
+        "active": true,
+        "max": 113,
+        "avg": 1.7271842956542969
+    },
+    {
+        "name": "kodim23_yuv420_8bpc_rav1e_to_libgav1_qp24_60_speed10",
+        "input": "kodim23_yuv420_8bpc.y4m",
+        "enc": "rav1e",
+        "dec": "libgav1",
+        "minQP": 24,
+        "maxQP": 60,
+        "speed": 10,
+        "active": true,
+        "max": 113,
+        "avg": 1.7759075164794922
+    }
 ]
\ No newline at end of file
diff --git a/tests/docker/build.sh b/tests/docker/build.sh
index 4591769..442640a 100644
--- a/tests/docker/build.sh
+++ b/tests/docker/build.sh
@@ -34,7 +34,7 @@ nasm --version
 
 # aom
 cd
-git clone -b v3.1.1 --depth 1 https://aomedia.googlesource.com/aom
+git clone -b v3.2.0 --depth 1 https://aomedia.googlesource.com/aom
 cd aom
 mkdir build.avif
 cd build.avif
@@ -43,7 +43,7 @@ ninja install
 
 # dav1d
 cd
-git clone -b 0.9.0 --depth 1 https://code.videolan.org/videolan/dav1d.git
+git clone -b 0.9.2 --depth 1 https://code.videolan.org/videolan/dav1d.git
 cd dav1d
 mkdir build
 cd build
diff --git a/tests/oss-fuzz/avif_decode_fuzzer.cc b/tests/oss-fuzz/avif_decode_fuzzer.cc
index 4165745..bcab665 100644
--- a/tests/oss-fuzz/avif_decode_fuzzer.cc
+++ b/tests/oss-fuzz/avif_decode_fuzzer.cc
@@ -18,13 +18,26 @@ extern "C" int LLVMFuzzerTestOneInput(const uint8_t * Data, size_t Size)
     static size_t yuvDepthsCount = sizeof(yuvDepths) / sizeof(yuvDepths[0]);
 
     avifDecoder * decoder = avifDecoderCreate();
-    avifResult result = avifDecoderSetIOMemory(decoder, Data, Size);
-    if (result == AVIF_RESULT_OK) {
-        result = avifDecoderParse(decoder);
-    }
+    decoder->allowProgressive = AVIF_TRUE;
+    // ClusterFuzz passes -rss_limit_mb=2560 to avif_decode_fuzzer. Empirically setting
+    // decoder->imageSizeLimit to this value allows avif_decode_fuzzer to consume no more than
+    // 2560 MB of memory.
+    static_assert(11 * 1024 * 10 * 1024 <= AVIF_DEFAULT_IMAGE_SIZE_LIMIT, "");
+    decoder->imageSizeLimit = 11 * 1024 * 10 * 1024;
+    avifIO * io = avifIOCreateMemoryReader(Data, Size);
+    // Simulate Chrome's avifIO object, which is not persistent.
+    io->persistent = AVIF_FALSE;
+    avifDecoderSetIO(decoder, io);
+    avifResult result = avifDecoderParse(decoder);
     if (result == AVIF_RESULT_OK) {
         for (int loop = 0; loop < 2; ++loop) {
             while (avifDecoderNextImage(decoder) == AVIF_RESULT_OK) {
+                if ((loop != 0) || (decoder->imageIndex != 0)) {
+                    // Skip the YUV<->RGB conversion tests, which are time-consuming for large
+                    // images. It suffices to run these tests only for loop == 0 and only for the
+                    // first image of an image sequence.
+                    continue;
+                }
                 avifRGBImage rgb;
                 avifRGBImageSetDefaults(&rgb, decoder->image);
 
@@ -37,7 +50,9 @@ extern "C" int LLVMFuzzerTestOneInput(const uint8_t * Data, size_t Size)
                             rgb.chromaUpsampling = upsamplings[upsamplingsIndex];
                             avifRGBImageAllocatePixels(&rgb);
                             avifResult rgbResult = avifImageYUVToRGB(decoder->image, &rgb);
-                            if (rgbResult == AVIF_RESULT_OK) {
+                            // Since avifImageRGBToYUV() ignores rgb.chromaUpsampling, we only need
+                            // to test avifImageRGBToYUV() with a single upsamplingsIndex.
+                            if ((rgbResult == AVIF_RESULT_OK) && (upsamplingsIndex == 0)) {
                                 for (size_t yuvDepthsIndex = 0; yuvDepthsIndex < yuvDepthsCount; ++yuvDepthsIndex) {
                                     // ... and back to YUV
                                     avifImage * tempImage = avifImageCreate(decoder->image->width,
diff --git a/tests/testcase.c b/tests/testcase.c
index 01e9da0..5d16a60 100644
--- a/tests/testcase.c
+++ b/tests/testcase.c
@@ -220,6 +220,9 @@ int testCaseRun(TestCase * tc, const char * dataDir, avifBool generating)
 
     encoder = avifEncoderCreate();
     encoder->codecChoice = tc->encodeChoice;
+    encoder->minQuantizer = tc->minQuantizer;
+    encoder->maxQuantizer = tc->maxQuantizer;
+    encoder->speed = tc->speed;
     encoder->maxThreads = 4; // TODO: pick something better here
     if (avifEncoderWrite(encoder, image, &encodedData) != AVIF_RESULT_OK) {
         printf("ERROR[%s]: Encode failed\n", tc->name);
