Change to tarball generation?

Albert Astals Cid aacid at
Wed May 23 16:58:52 UTC 2012

On Tuesday, 22 May 2012 at 23:48:25, Michael Pyne wrote:
> Hi all,
> I noticed something while we were on the topic of tagging the beta tomorrow
> that I wanted to bring up, which is a concern with tarball generation.
> Specifically, the various parallelizable tarball compressors (pixz, pbzip2)
> seem to generate extraneous trailing data.
> tar is smart enough to ignore this extra data, but this can affect
> decompressing our tarballs in a pipeline (i.e. xz --decompress
> | tar xf -), as tar closing its STDIN causes xz to
> write its excess data to a broken pipe.
> This probably doesn't annoy a ton of different people (source-based
> distros like Gentoo being the obvious exception), but if the speedup is not
> very substantial it would be better to use xz or bzip2 to avoid the problem
> entirely.
> (This is done by adjusting the value of "compressors" in the pack release
> script in case you're wondering).

The machine I'm generating the tarballs on doesn't have pixz, so I'll use xz.


> It might be possible to still get some concurrency benefit by batching up
> modules to "pack" and then running 4 or 8 (or however many CPUs are around)
> separate pack scripts at once, or fire off a pack while starting on tagging
> the next module, etc.
> Thoughts? I'll be very clear that I don't think this should affect creating
> the beta tarballs at all but if we choose to avoid the parallelizing
> compressors hopefully that would be in time for the release candidates.
> Regards,
>  - Michael Pyne
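One way the batching Michael suggests could look, as a rough sketch (this is not the actual pack release script; the module names are placeholders): compress each module's tarball with plain single-threaded xz, but let xargs run several compressions at once, recovering some concurrency without a parallel compressor's trailing-data problem.

```shell
#!/bin/sh
# Sketch: concurrency across modules rather than within one compressor.
# Each xz invocation is plain single-threaded, so the resulting .tar.xz
# files are safe to decompress in a pipeline; xargs -P4 runs up to four
# compressions at a time.
set -e
mkdir -p pack-demo && cd pack-demo
for m in modA modB modC modD; do
    mkdir -p "$m" && echo "$m" > "$m/README"
    tar cf "$m.tar" "$m"             # stand-in for the per-module pack step
done
printf '%s\n' modA.tar modB.tar modC.tar modD.tar | xargs -n1 -P4 xz -f
ls ./*.tar.xz                        # lists the four compressed tarballs
```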

More information about the release-team mailing list