Turning Diffusion-Based Image Colorization Into Efficient Color Compression

Abstract:

The work of Levin et al. (2004) popularized stroke-based methods that add color to grayscale images according to a small number of user-specified color samples. Even though such reconstructions from sparse data suggest a possible use in compression, only a few attempts have been made in this direction so far. Diffusion-based compression methods pursue a similar idea: they store only a few image pixels and inpaint the missing regions. Despite this close relation and the lack of diffusion-based color codecs, colorization ideas have so far only been integrated into transform-based approaches such as JPEG. We address this missing link with two contributions. First, we show the relation between the discrete colorization of Levin et al. and continuous diffusion-based inpainting in the YCbCr color space, which decomposes the image into a luma (brightness) channel and two chroma (color) channels. Our luma-guided diffusion framework steers the diffusion inpainting in the chroma channels according to the structure in the luma channel. We show that an anisotropic variant of this luma-guided colorization significantly outperforms the method of Levin et al. Second, we propose a new luma preference codec that invests a large fraction of the bit budget into an accurate representation of the luma channel. This allows a high-quality reconstruction of the color data with our colorization technique, while exploiting the fact that the human visual system is more sensitive to structural than to color information. Our experiments demonstrate that our new codec outperforms the state of the art in diffusion-based image compression and is competitive with transform-based codecs.
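To make the luma-guided inpainting idea concrete, the sketch below reconstructs one chroma channel from a few stored color samples by explicit diffusion whose conductivities are derived from luma gradients, so that color does not leak across strong brightness edges. This is only an illustrative, hypothetical implementation (the function and parameter names luma_guided_chroma_inpainting, lam, and dt are our own, not from the paper): it uses simple Perona-Malik-style isotropic edge-stopping weights, whereas the paper's method relies on a full anisotropic diffusion tensor computed from the luma channel.

```python
import numpy as np

def luma_guided_chroma_inpainting(luma, chroma, known, iters=500, dt=0.2, lam=100.0):
    """Hypothetical sketch of luma-guided diffusion inpainting for ONE chroma
    channel (run it separately for Cb and Cr).

    luma   : (H, W) float array, full brightness channel
    chroma : (H, W) float array, chroma values (only meaningful where known)
    known  : (H, W) boolean mask of the stored color samples
    """
    # Edge-stopping conductivities from luma differences
    # (Perona-Malik-type weights: small across strong luma edges).
    gx = np.diff(luma, axis=1, append=luma[:, -1:])
    gy = np.diff(luma, axis=0, append=luma[-1:, :])
    wx = 1.0 / (1.0 + lam * gx**2)
    wy = 1.0 / (1.0 + lam * gy**2)

    u = np.where(known, chroma, 0.0)
    for _ in range(iters):
        # Fluxes with luma-dependent conductivities; the zero flux in the
        # appended column/row acts as a reflecting (Neumann) boundary.
        fx = wx * np.diff(u, axis=1, append=u[:, -1:])
        fy = wy * np.diff(u, axis=0, append=u[-1:, :])
        div = (fx - np.roll(fx, 1, axis=1)) + (fy - np.roll(fy, 1, axis=0))
        u = u + dt * div
        u[known] = chroma[known]  # keep stored samples fixed (Dirichlet data)
    return u
```

Running this once for Cb and once for Cr, with known marking the sparse color samples kept by the codec, yields a colorized image in which the diffused chroma respects the structure of the luma channel.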

 

