# 2020

• M. Hernández-Cabronero, J. Portell, I. Blanes, J. Serra-Sagristà,
"High-Performance Lossless Compression of Hyperspectral Remote Sensing Scenes Based on Spectral Decorrelation",
MDPI Remote Sensing, vol.12, no.18, pp.2955, 2020.

The capacity of the downlink channel is a major bottleneck for applications based on remote sensing hyperspectral imagery (HSI). Data compression is an essential tool to maximize the amount of HSI scenes that can be retrieved on the ground. At the same time, energy and hardware constraints of spaceborne devices impose limitations on the complexity of practical compression algorithms. To avoid any distortion in the analysis of the HSI data, only lossless compression is considered in this study. This work aims at finding the most advantageous compression–complexity trade-off within the state of the art in HSI compression. To do so, a novel comparison of the most competitive spectral decorrelation approaches combined with the best performing low-complexity compressors of the state of the art is presented. Compression performance and execution time results are obtained for a set of 47 HSI scenes produced by 14 different sensors in real remote sensing missions. Assuming only a limited amount of energy is available, obtained data suggest that the FAPEC algorithm yields the best trade-off. When compared to the CCSDS 123.0-B-2 standard, FAPEC is 5.0 times faster and its compressed data rates are on average within 16% of the CCSDS standard. In scenarios where energy constraints can be relaxed, CCSDS 123.0-B-2 yields the best average compression results of all evaluated methods.
@article { 2020_hernandez_fapec_decorrelation,
author = {M. Hern{\'a}ndez-Cabronero and J. Portell and I. Blanes and J.
Serra-Sagrist{\`a}},
title = {High-Performance Lossless Compression of Hyperspectral Remote Sensing
Scenes Based on Spectral Decorrelation},
journal = {MDPI Remote Sensing},
year = {2020},
pages = {2955},
volume = {12},
number = {18},
doi = {10.3390/rs12182955},
abstract = {The capacity of the downlink channel is a major bottleneck for
applications based on remote sensing hyperspectral imagery (HSI). Data
compression is an essential tool to maximize the amount of HSI scenes that can
be retrieved on the ground. At the same time, energy and hardware constraints of
spaceborne devices impose limitations on the complexity of practical compression
algorithms. To avoid any distortion in the analysis of the HSI data, only
lossless compression is considered in this study. This work aims at finding the
most advantageous compression–complexity trade-off within the state of the art
in HSI compression. To do so, a novel comparison of the most competitive
spectral decorrelation approaches combined with the best performing
low-complexity compressors of the state of the art is presented. Compression performance
and execution time results are obtained for a set of 47 HSI scenes produced by
14 different sensors in real remote sensing missions. Assuming only a limited
amount of energy is available, obtained data suggest that the FAPEC algorithm
yields the best trade-off. When compared to the CCSDS 123.0-B-2 standard, FAPEC
is 5.0 times faster and its compressed data rates are on average within 16% of
the CCSDS standard. In scenarios where energy constraints can be relaxed, CCSDS
123.0-B-2 yields the best average compression results of all evaluated methods.}
}


• K. Chow, D. E. O. Tzamarias, M. Hernández-Cabronero, I. Blanes, J. Serra-Sagristà,
"Analysis of Variable-Length Codes for Integer Encoding in Hyperspectral Data Compression with the k2-Raster Compact Data Structure",
MDPI Remote Sensing, vol.12, no.12, pp.1983, 2020.

This paper examines the various variable-length encoders that provide integer encoding to hyperspectral scene data within a k2-raster compact data structure. This compact data structure leads to a compression ratio similar to that produced by some of the classical compression techniques. This compact data structure also provides direct access for query to its data elements without requiring any decompression. The selection of the integer encoder is critical for obtaining a competitive performance considering both the compression ratio and access time. In this research, we show experimental results of different integer encoders such as Rice, Simple9, Simple16, PForDelta codes, and DACs. Further, a method to determine an appropriate k value for building a k2-raster compact data structure with competitive performance is discussed.
@article { 2020_chow_analysis,
author = {K. Chow and D. E. O. Tzamarias and M. Hern{\'a}ndez-Cabronero and I.
Blanes and J. Serra-Sagrist{\`a}},
title = {Analysis of Variable-Length Codes for Integer Encoding in
Hyperspectral Data Compression with the k2-Raster Compact Data Structure},
journal = {MDPI Remote Sensing},
year = {2020},
pages = {1983},
volume = {12},
number = {12},
doi = {10.3390/rs12121983},
abstract = {This paper examines the various variable-length encoders that
provide integer encoding to hyperspectral scene data within a k2-raster compact
data structure. This compact data structure leads to a compression ratio similar
to that produced by some of the classical compression techniques. This compact
data structure also provides direct access for query to its data elements
without requiring any decompression. The selection of the integer encoder is
critical for obtaining a competitive performance considering both the
compression ratio and access time. In this research, we show experimental
results of different integer encoders such as Rice, Simple9, Simple16, PForDelta
codes, and DACs. Further, a method to determine an appropriate k value for
building a k2-raster compact data structure with competitive performance is
discussed.}
}
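As background for readers unfamiliar with the integer encoders compared above: a Rice code of parameter k writes the quotient n >> k in unary, followed by the k low-order bits of n. A minimal illustrative sketch (not the implementation evaluated in the paper):

```python
def rice_encode(n, k):
    """Rice code of parameter k: unary quotient, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    remainder_bits = "".join(str((r >> i) & 1) for i in range(k - 1, -1, -1))
    return "1" * q + "0" + remainder_bits

def rice_decode(bits, k):
    """Decode one Rice-coded integer; returns (value, bits consumed)."""
    q = 0
    while bits[q] == "1":          # count the unary quotient
        q += 1
    r = int(bits[q + 1:q + 1 + k] or "0", 2)
    return (q << k) | r, q + 1 + k
```

For example, `rice_encode(9, 2)` yields "11001": two unary "1"s for the quotient 9 >> 2 = 2, a terminating "0", and "01" for the remainder. Small k suits small integers; larger k suits larger ones, which is why the choice of encoder and parameter matters for the access-time/ratio trade-off studied in the paper.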


• Manuel Martínez Torres, Miguel Hernández-Cabronero, Ian Blanes, Joan Serra-Sagristà,
"High-throughput variable-to-fixed entropy codec using selective, stochastic code forests",
IEEE Access, vol.8, no.1, pp.81283-81297, 2020.

Efficient high-throughput (HT) compression algorithms are paramount to meet the stringent constraints of present and upcoming data storage, processing, and transmission systems. In particular, latency, bandwidth and energy requirements are critical for those systems. Most HT codecs are designed to maximize compression speed, and secondarily to minimize compressed lengths. On the other hand, decompression speed is often equally or more critical than compression speed, especially in scenarios where decompression is performed multiple times and/or at critical parts of a system. In this work, an algorithm to design variable-to-fixed (VF) codes is proposed that prioritizes decompression speed. Stationary Markov analysis is employed to generate multiple, jointly optimized codes (denoted code forests). Their average compression efficiency is on par with the state of the art in VF codes, e.g., within 1% of Yamamoto et al.'s algorithm. The proposed code forest structure enables the implementation of highly efficient codecs, with decompression speeds 3.8 times faster than other state-of-the-art HT entropy codecs with equal or better compression ratios for natural data sources. Compared to these HT codecs, the proposed forests yield similar compression efficiency and speeds.
@article { 2020_martinez_high_throughput,
author = {Manuel Mart{\'i}nez Torres and Miguel Hern{\'a}ndez-Cabronero and Ian
Blanes and Joan Serra-Sagrist{\`a}},
title = {High-throughput variable-to-fixed entropy codec using selective,
stochastic code forests},
journal = {IEEE Access},
year = {2020},
pages = {81283-81297},
volume = {8},
number = {1},
doi = {10.1109/ACCESS.2020.2991314},
abstract = {Efficient high-throughput (HT) compression algorithms are paramount
to meet the stringent constraints of present and upcoming data storage,
processing, and transmission systems. In particular, latency, bandwidth and
energy requirements are critical for those systems. Most HT codecs are designed
to maximize compression speed, and secondarily to minimize compressed lengths.
On the other hand, decompression speed is often equally or more critical than
compression speed, especially in scenarios where decompression is performed
multiple times and/or at critical parts of a system. In this work, an algorithm
to design variable-to-fixed (VF) codes is proposed that prioritizes
decompression speed. Stationary Markov analysis is employed to generate
multiple, jointly optimized codes (denoted code forests). Their average
compression efficiency is on par with the state of the art in VF codes, e.g.,
within 1% of Yamamoto et al.'s algorithm. The proposed code forest structure
enables the implementation of highly efficient codecs, with decompression speeds
3.8 times faster than other state-of-the-art HT entropy codecs with equal or
better compression ratios for natural data sources. Compared to these HT codecs,
the proposed forests yield similar compression efficiency and speeds.}
}
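For background on variable-to-fixed codes in general: the classic Tunstall construction grows a parse tree by repeatedly splitting the most probable leaf into one child per source symbol, then assigns each leaf a fixed-length codeword. A toy sketch of that classic construction (a generic VF code, not the paper's selective, stochastic code-forest algorithm):

```python
import heapq

def tunstall_dictionary(probs, num_codewords):
    """Grow a Tunstall parse tree: repeatedly split the most probable leaf
    until num_codewords leaves exist. Each leaf (a variable-length source
    string) then receives one fixed-length codeword."""
    heap = [(-p, s) for s, p in probs.items()]  # max-heap via negated probs
    heapq.heapify(heap)
    # Splitting one leaf adds len(probs) children and removes the leaf itself.
    while len(heap) + len(probs) - 1 <= num_codewords:
        neg_p, s = heapq.heappop(heap)          # most probable leaf
        for sym, p in probs.items():
            heapq.heappush(heap, (neg_p * p, s + sym))
    return sorted(s for _, s in heap)
```

For a binary source with P(a) = 0.7, P(b) = 0.3 and 4 codewords, this yields the parse set {aaa, aab, ab, b}, each mapped to a 2-bit codeword; the decoder is a fixed-size table lookup, which is what makes VF decompression fast.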



# 2019

• I. Blanes, M. Hernández-Cabronero, J. Serra-Sagristà, M.W. Marcellin,
"Lower Bounds on the Redundancy of Huffman Codes with Known and Unknown Probabilities",
IEEE Access, vol.7, pp.115857-115870, 2019.

In this paper, we provide a method to obtain tight lower bounds on the minimum redundancy achievable by a Huffman code when the probability distribution underlying an alphabet is only partially known. In particular, we address the case where the occurrence probabilities are unknown for some of the symbols in an alphabet. Bounds can be obtained for alphabets of a given size, for alphabets of up to a given size, and alphabets of arbitrary size. The method operates on a computer algebra system, yielding closed-form numbers for all results. Finally, we show the potential of the proposed method to shed some light on the structure of the minimum redundancy achievable by the Huffman code.
@article { 2019_blanes_lower,
author = {I. Blanes and M. Hern{\'a}ndez-Cabronero and J. Serra-Sagrist{\`a}
and M.W. Marcellin},
title = {Lower Bounds on the Redundancy of Huffman Codes with Known and Unknown
Probabilities},
journal = {IEEE Access},
year = {2019},
volume = {7},
pages = {115857-115870},
doi = {10.1109/ACCESS.2019.2932206},
abstract = {In this paper, we provide a method to obtain tight lower bounds on
the minimum redundancy achievable by a Huffman code when the probability
distribution underlying an alphabet is only partially known. In particular, we
address the case where the occurrence probabilities are unknown for some of the
symbols in an alphabet. Bounds can be obtained for alphabets of a given size,
for alphabets of up to a given size, and alphabets of arbitrary size. The method
operates on a computer algebra system, yielding closed-form numbers for all
results. Finally, we show the potential of the proposed method to shed some
light on the structure of the minimum redundancy achievable by the Huffman
code.}
}
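When the full distribution is known, the quantity bounded in this paper — the redundancy of a Huffman code, i.e., its expected codeword length minus the source entropy — can be computed directly. A minimal sketch (a direct computation for fully known probabilities, not the paper's bounding method for partially known ones):

```python
import heapq
from math import log2

def huffman_redundancy(probs):
    """Expected Huffman codeword length minus the source entropy (bits)."""
    lengths = [0] * len(probs)
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, syms1 = heapq.heappop(heap)
        p2, syms2 = heapq.heappop(heap)
        for i in syms1 + syms2:   # each merge adds one bit to these codewords
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, syms1 + syms2))
    expected_length = sum(p, * (l,))[0] if False else sum(p * l for p, l in zip(probs, lengths))
    entropy = -sum(p * log2(p) for p in probs if p > 0)
    return expected_length - entropy
```

For a dyadic distribution such as (0.5, 0.25, 0.25) the redundancy is exactly zero; in general it is non-negative and below 1 bit, the regime whose fine structure the paper's lower bounds explore.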


• Y. Lin, F. Liu, M. Hernandez-Cabronero, E. Ahanonu, M. Marcellin, A. Bilgin, A. Ashok,
"Perception-Optimized Encoding for Visually Lossy Image Compression",
2019 Data Compression Conference (DCC), pp.592-592, March 2019.

(Full text may be available at the publisher's site)

We propose a compression encoding method to perceptually optimize the image quality based on a novel quality metric, which emulates how the human visual system forms an opinion of a compressed image. Compared to the existing perceptually optimized compression methods, which usually aim to minimize the detectability of compression artifacts and are sub-optimal in the visually lossless regime, the proposed encoder aims to operate in the visually lossy regime. We implement the proposed encoder within the JPEG 2000 standard, and demonstrate its advantage over both detectability-based and conventional MSE encoders.
@inproceedings { Lin19,
author = {Y. Lin and F. Liu and M. Hernandez-Cabronero and E. Ahanonu and M.
Marcellin and A. Bilgin and A. Ashok},
booktitle = {2019 Data Compression Conference (DCC)},
title = {Perception-Optimized Encoding for Visually Lossy Image Compression},
year = {2019},
pages = {592-592},
doi = {10.1109/DCC.2019.00104},
issn = {1068-0314},
month = {March},
url = {http://dx.doi.org/10.1109/DCC.2019.00104},
abstract = {We propose a compression encoding method to perceptually optimize
the image quality based on a novel quality metric, which emulates how the human
visual system forms an opinion of a compressed image. Compared to the existing
perceptually optimized compression methods, which usually aim to minimize the
detectability of compression artifacts and are sub-optimal in the visually
lossless regime, the proposed encoder aims to operate in the visually lossy regime. We
implement the proposed encoder within the JPEG 2000 standard, and demonstrate
its advantage over both detectability-based and conventional MSE encoders.}
}


• Joan Bartrina-Rapesta, Joan Serra-Sagristà, M.W. Marcellin, M. Hernández-Cabronero,
"A Novel Rate-Control for Predictive Image Coding with Constant Quality",
IEEE Access, vol.7, pp.103918-103930, 2019.

Predictive image coding systems yield a high-compression performance at low computational complexity, and are therefore popular in standards and prominent coding techniques for both lossless and near-lossless compression. However, few prediction-based coding techniques include rate-control approaches because of the difficulty in properly combining the prediction feedback loop and the quantization. None of the rate-control approaches for predictive image coding are focused on delivering homogeneous quality along the whole image. The main objective of this paper is to define a novel rate-control approach based on prediction that homogenises the quality over the entire image. The extensive experimental results, on medical, remote sensing, and natural images, suggest that our proposal attains more regular image quality and lower peak absolute error than the state-of-the-art approaches based on prediction and also transform-based techniques such as JPEG2000 and CCSDS-122.
@article { 2019_bartrina_ratecontrol,
author = {Joan Bartrina-Rapesta and Joan Serra-Sagrist{\`a} and M.W. Marcellin
and M. Hern{\'a}ndez-Cabronero},
title = {A Novel Rate-Control for Predictive Image Coding with Constant
Quality},
journal = {IEEE Access},
volume = {7},
pages = {103918-103930},
doi = {10.1109/ACCESS.2019.2931442},
year = {2019},
abstract = {Predictive image coding systems yield a high-compression
performance at low computational complexity, and are therefore popular in
standards and prominent coding techniques for both lossless and near-lossless
compression. However, few prediction-based coding techniques include
rate-control approaches because of the difficulty in properly combining the
prediction feedback loop and the quantization. None of the rate-control
approaches for predictive image coding are focused on delivering homogeneous
quality along the whole image. The main objective of this paper is to define a
novel rate-control approach based on prediction that homogenises the quality
over the entire image. The extensive experimental results, on medical, remote
sensing, and natural images, suggest that our proposal attains more regular
image quality and lower peak absolute error than the state-of-the-art approaches
based on prediction and also transform-based techniques such as JPEG2000 and
CCSDS-122.}
}


• Miguel Hernández-Cabronero, David Vilaseca, Guillermo Becker, Emilio Tylson, Joan Serra-Sagristà,
"Effect of Lightweight Image Compression on CNN-based Image Segmentation",
Grenoble NewSpace Week (GNSW), pp.31, apr 2019.

@inproceedings { 2019_hernandez_effect,
author = {Miguel Hern{\'a}ndez-Cabronero and David Vilaseca and Guillermo
Becker and Emilio Tylson and Joan Serra-Sagrist{\`a}},
title = {Effect of Lightweight Image Compression on CNN-based Image
Segmentation},
booktitle = {Grenoble NewSpace Week (GNSW)},
year = {2019},
month = {apr},
pages = {31},
}


• Ian Blanes, Aaron Kiely, Miguel Hernández-Cabronero, Joan Serra-Sagristà,
"Performance Impact of Parameter Tuning on the CCSDS-123.0-B-2 Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression Standard",
MDPI Remote Sensing, vol.11, no.11, pp.1390, 2019.

(Also available at the publisher's site)

This article studies the performance impact related to different parameter choices for the new CCSDS-123.0-B-2 Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression standard. This standard supersedes CCSDS-123.0-B-1 and extends it by incorporating a new near-lossless compression capability, as well as other new features. This article studies the coding performance impact of different choices for the principal parameters of the new extensions, in addition to reviewing related parameter choices for existing features. Experimental results include data from 16 different instruments with varying detector types, image dimensions, number of spectral bands, bit depth, level of noise, level of calibration, and other image characteristics. Guidelines are provided on how to adjust the parameters in relation to their coding performance impact.
@article { Blanes19rs,
author = {Ian Blanes and Aaron Kiely and Miguel Hern{\'a}ndez-Cabronero and
Joan Serra-Sagrist{\`a}},
title = {Performance Impact of Parameter Tuning on the CCSDS-123.0-B-2
Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image
Compression Standard},
journal = {MDPI Remote Sensing},
pages = {1390},
volume = {11},
number = {11},
doi = {10.3390/rs11111390},
year = {2019},
url = {https://www.mdpi.com/2072-4292/11/11/1390},
abstract = {This article studies the performance impact related to different
parameter choices for the new CCSDS-123.0-B-2 Low-Complexity Lossless and
Near-Lossless Multispectral and Hyperspectral Image Compression standard. This
standard supersedes CCSDS-123.0-B-1 and extends it by incorporating a new
near-lossless compression capability, as well as other new features. This
article studies the coding performance impact of different choices for the
principal parameters of the new extensions, in addition to reviewing related
parameter choices for existing features. Experimental results include data from
16 different instruments with varying detector types, image dimensions, number
of spectral bands, bit depth, level of noise, level of calibration, and other
image characteristics. Guidelines are provided on how to adjust the parameters
in relation to their coding performance impact.}
}


• Miguel Hernández-Cabronero, Victor Sanchez, Ian Blanes, Francesc Aulí-Llinàs, Michael W. Marcellin, Joan Serra-Sagristà,
"Mosaic-Based Color-Transform Optimization for the Lossy and Lossy-to-Lossless compression of Pathology Whole-Slide Images",
IEEE Transactions on Medical Imaging, vol.38, no.10, pp.21-32, 2019.

(Also available at the publisher's site)

The use of whole-slide images (WSIs) in pathology entails stringent storage and transmission requirements because of their huge dimensions. Therefore, image compression is an essential tool to enable efficient access to these data. In particular, color transforms are needed to exploit the very high degree of inter-component correlation and obtain competitive compression performance. Even though state-of-the-art color transforms remove some redundancy, they disregard important details of the compression algorithm applied after the transform. Therefore, their coding performance is not optimal. We propose an optimization method called Mosaic Optimization for designing irreversible and reversible color transforms simultaneously optimized for any given WSI and the subsequent compression algorithm. Mosaic Optimization is designed to attain reasonable computational complexity and enable continuous scanner operation. Exhaustive experimental results indicate that, for JPEG 2000 at identical compression ratios, the optimized transforms yield images more similar to the original than other state-of-the-art transforms. Specifically, irreversible optimized transforms outperform the Karhunen-Loève Transform (KLT) in terms of PSNR (up to 1.1 dB), the HDR-VDP-2 visual distortion metric (up to 3.8 dB) and accuracy of computer-aided nuclei detection tasks (F1 score up to 0.04 higher). Additionally, reversible optimized transforms achieve PSNR, HDR-VDP-2 and nuclei detection accuracy gains of up to 0.9 dB, 7.1 dB and 0.025, respectively, when compared to the reversible color transform (RCT) in lossy-to-lossless compression regimes.
@article { Hernandez19TMI,
author = {Miguel Hern{\'a}ndez-Cabronero and Victor Sanchez and Ian Blanes and
Francesc Aul{\'i}-Llin{\`a}s and Michael W. Marcellin and Joan
Serra-Sagrist{\`a}},
title = {Mosaic-Based Color-Transform Optimization for
the Lossy and Lossy-to-Lossless compression of Pathology Whole-Slide Images},
journal = {IEEE Transactions on Medical Imaging},
year = {2019},
volume = {38},
number = {10},
pages = {21-32},
url = {https://dx.doi.org/10.1109/TMI.2018.2852685},
abstract = {The use of whole-slide images (WSIs) in pathology entails stringent
storage and transmission requirements because of their huge dimensions.
Therefore, image compression is an essential tool to enable efficient access to
these data. In particular, color transforms are needed to exploit the very high
degree of inter-component correlation and obtain competitive compression
performance. Even though state-of-the-art color transforms remove some
redundancy, they disregard important details of the compression algorithm
applied after the transform. Therefore, their coding performance is not optimal.
We propose an optimization method called Mosaic Optimization for designing
irreversible and reversible color transforms simultaneously optimized for any
given WSI and the subsequent compression algorithm. Mosaic Optimization is
designed to attain reasonable computational complexity and enable continuous
scanner operation. Exhaustive experimental results indicate that, for JPEG 2000
at identical compression ratios, the optimized transforms yield images more
similar to the original than other state-of-the-art transforms. Specifically,
irreversible optimized transforms outperform the Karhunen-Lo{\`e}ve Transform (KLT)
in terms of PSNR (up to 1.1 dB), the HDR-VDP-2 visual distortion metric (up to
3.8 dB) and accuracy of computer-aided nuclei detection tasks (F1 score up to
0.04 higher). Additionally, reversible optimized transforms achieve PSNR,
HDR-VDP-2 and nuclei detection accuracy gains of up to 0.9 dB, 7.1 dB and 0.025,
respectively, when compared to the reversible color transform (RCT) in
lossy-to-lossless compression regimes.}
}
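The reversible color transform (RCT) used as a baseline above is the integer transform from JPEG 2000 Part 1; it round-trips exactly in integer arithmetic, which is what makes lossy-to-lossless coding possible. A sketch of its forward and inverse passes:

```python
def rct_forward(r, g, b):
    """JPEG 2000 reversible color transform (integer, exactly invertible)."""
    y = (r + 2 * g + b) >> 2   # luma approximation via floor division
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    # Python's >> floors toward -infinity, also for negatives, as required.
    g = y - ((cb + cr) >> 2)
    return cr + g, g, cb + g
```

The floor divisions cancel exactly in the inverse, so every RGB triple is recovered bit-for-bit; the transforms optimized in the paper trade this fixed transform for ones tuned per whole-slide image.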



# 2018

• A. Kiely, M. Klimesh, I. Blanes, J. Ligo, E. Magli, N. Aranki, M. Burl, R. Camarero, M. Cheng, S. Dolinar, D. Dolman, G. Flesch, H. Ghassemi, M. Gilbert, Miguel Hernández-Cabronero, D. Keymeulen, M. Le, H. Luong, C. McGuiness, G. Moury, T. Pham, M. Plintovic, F. Sala, L. Santos, A. Schaar, J. Serra-Sagristà, S. Shin, B. Sundlie, D. Valsesia, R. Vitulli, E. Wong, W. Wu, H. Xie, H. Zhou,
"The new CCSDS Standard for Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression",
In Proceedings of the ESA On-Board Payload Data Compression Workshop, OBPDC, September 2018.

(Also available at the publisher's site)

This paper describes the emerging Issue 2 of the CCSDS-123.0-B standard for low-complexity compression of multispectral and hyperspectral imagery, focusing on its new features and capabilities. Most significantly, this new issue incorporates a closed-loop quantization scheme to provide near-lossless compression capability while still supporting lossless compression, and introduces a new entropy coding option that provides better compression of low-entropy data.
@inproceedings { KielyOBPDC18,
author = {A. Kiely and M. Klimesh and I. Blanes and J. Ligo and E. Magli and N.
Aranki and M. Burl and R. Camarero and M. Cheng and S. Dolinar and D. Dolman and
G. Flesch and H. Ghassemi and M. Gilbert and Miguel Hern{\'a}ndez-Cabronero and
D. Keymeulen and M. Le and H. Luong and C. McGuiness and G. Moury and T. Pham
and M. Plintovic and F. Sala and L. Santos and A. Schaar and J.
Serra-Sagrist{\`a} and S. Shin and B. Sundlie and D. Valsesia and R. Vitulli and
E. Wong and W. Wu and H. Xie and H. Zhou},
title = {The new CCSDS Standard for Low-Complexity Lossless and Near-Lossless
Multispectral and Hyperspectral Image Compression},
booktitle = {Proceedings of the ESA On-Board Payload Data Compression
Workshop, OBPDC},
year = {2018},
month = {September},
url = {https://ntrs.nasa.gov/search.jsp?R=20180006784},
abstract = {This paper describes the emerging Issue 2 of the CCSDS-123.0-B
standard for low-complexity compression of multispectral and hyperspectral
imagery, focusing on its new features and capabilities. Most significantly, this
new issue incorporates a closed-loop quantization scheme to provide
near-lossless compression capability while still supporting lossless
compression, and introduces a new entropy coding option that provides better
compression of low-entropy data.}
}
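The closed-loop quantization idea mentioned above can be illustrated with a toy DPCM loop: because the predictor operates on reconstructed samples and residuals are quantized with step 2m+1, every sample's reconstruction error stays within ±m. A simplified sketch (a generic near-lossless DPCM with a previous-sample predictor, not the actual CCSDS-123.0-B-2 predictor):

```python
def near_lossless_dpcm(samples, m):
    """Closed-loop DPCM: residuals against the previous *reconstructed*
    sample are uniformly quantized with step 2*m + 1, bounding every
    reconstruction error by m (m = 0 gives lossless coding)."""
    step = 2 * m + 1
    prev = 0
    indices, recon = [], []
    for s in samples:
        residual = s - prev
        # Midtread uniform quantizer: |residual - q*step| <= m.
        if residual >= 0:
            q = (residual + m) // step
        else:
            q = -((-residual + m) // step)
        r = prev + q * step
        indices.append(q)
        recon.append(r)
        prev = r    # feedback: the decoder sees exactly this value
    return indices, recon
```

Feeding the predictor reconstructed (rather than original) samples is what keeps encoder and decoder in lockstep, so quantization errors never accumulate along the prediction chain.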


• J. Portell, Ian Blanes, Miguel Hernández-Cabronero, Joan Serra-Sagristà, R. Iudica, A. G. Villafranca,
"Prepending Spectral Decorrelating Transforms to FAPEC: A competitive High-Performance approach for Remote Sensing Data Compression",
In Proceedings of the ESA On-Board Payload Data Compression Workshop, OBPDC, September 2018.

@inproceedings { PortellOBPDC18,
author = {J. Portell and Ian Blanes and Miguel Hern{\'a}ndez-Cabronero and Joan
Serra-Sagrist{\`a} and R. Iudica and A. G. Villafranca},
title = {Prepending Spectral Decorrelating Transforms to FAPEC: A competitive
High-Performance approach for Remote Sensing Data Compression},
booktitle = {Proceedings of the ESA On-Board Payload Data Compression
Workshop, OBPDC},
year = {2018},
month = {September},
}


• Ian Blanes, Aaron Kiely, Miguel Hernández-Cabronero, Joan Serra-Sagristà,
"Performance Impact of Parameter Tuning on the Emerging CCSDS-123.0-B-2 Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression Standard",
In Proceedings of the ESA On-Board Payload Data Compression Workshop, OBPDC, September 2018.

@inproceedings { BlanesOBPDC18,
author = {Ian Blanes and Aaron Kiely and Miguel Hern{\'a}ndez-Cabronero and
Joan Serra-Sagrist{\`a}},
title = {Performance Impact of Parameter Tuning on the Emerging CCSDS-123.0-B-2
Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image
Compression Standard},
booktitle = {Proceedings of the ESA On-Board Payload Data Compression
Workshop, OBPDC},
year = {2018},
month = {September},
}


• Victor Sanchez, Miguel Hernández-Cabronero,
"Graph-based Rate Control in Pathology Imaging with Lossless Region of Interest Coding",
IEEE Transactions on Medical Imaging, vol.37, no.10, pp.2211-2223, 2018.

(Full text may be available at the publisher's site)

The increasing availability of digital pathology images has motivated the design of tools to foster multidisciplinary collaboration among researchers, pathologists, and computer scientists. Telepathology plays an important role in the development of collaborative tools, as it facilitates the transmission and access of pathology images by multiple users. However, the huge file size associated with pathology images usually prevents fully exploiting the potential of collaborative telepathology systems. Within this context, rate control (RC) is an important tool that allows meeting storage and bandwidth requirements by controlling the bit rate of the coded image. In this paper, we propose a novel graph-based RC algorithm with lossless region of interest (RoI) coding of pathology images. The algorithm, which is designed for block-based predictive transform coding methods, compresses the non-RoI in a lossy manner according to a target bit rate and the RoI in a lossless manner. It employs a graph where each node represents a constituent block of the image to be coded. By incorporating information about the coding cost similarities of blocks into the graph, a graph kernel is used to distribute a target bit budget among the non-RoI blocks. In order to increase RC accuracy, the algorithm uses a rate-lambda (R-λ) model to approximate the slope of the rate-distortion curve of the non-RoI, and a graph-based approach to guarantee that the target bit rate is accurately attained. The algorithm is implemented in the HEVC standard and tested over a wide range of pathology images with multiple RoIs. Evaluation results show that it outperforms other state-of-the-art methods designed for single images by very accurately attaining the target bit rate of the non-RoI.
@article { Sanchez18TMI,
author = {Victor Sanchez and Miguel Hern{\'a}ndez-Cabronero},
title = {Graph-based Rate Control in Pathology Imaging with Lossless Region of
Interest Coding},
journal = {IEEE Transactions on Medical Imaging},
year = {2018},
volume = {37},
number = {10},
pages = {2211-2223},
abstract = {The increasing availability of digital pathology images has
motivated the design of tools to foster multidisciplinary collaboration among
researchers, pathologists, and computer scientists. Telepathology plays an
important role in the development of collaborative tools, as it facilitates the
transmission and access of pathology images by multiple users. However, the huge
file size associated with pathology images usually prevents fully exploiting the
potential of collaborative telepathology systems. Within this context, rate
control (RC) is an important tool that allows meeting storage and bandwidth
requirements by controlling the bit rate of the coded image. In this paper, we
propose a novel graph-based RC algorithm with lossless region of interest (RoI)
coding of pathology images. The algorithm, which is designed for block-based
predictive transform coding methods, compresses the non-RoI in a lossy manner
according to a target bit rate and the RoI in a lossless manner. It employs a
graph where each node represents a constituent block of the image to be coded.
By incorporating information about the coding cost similarities of blocks into
the graph, a graph kernel is used to distribute a target bit budget among the
non-RoI blocks. In order to increase RC accuracy, the algorithm uses a
rate-lambda (R-$\lambda$) model to approximate the slope of the rate-distortion
curve of the non-RoI, and a graph-based approach to guarantee that the target
bit rate is accurately attained. The algorithm is implemented in the HEVC
standard and tested over a wide range of pathology images with multiple RoIs.
Evaluation results show that it outperforms other state-of-the-art methods
designed for single images by very accurately attaining the target bit rate of
the non-RoI.}
}


• Francesc Aulí-Llinàs, Michael W. Marcellin, Victor Sanchez, Joan Bartrina-Rapesta, Miguel Hernández-Cabronero,
"Dual Link Image Coding for Earth Observation Satellites",
IEEE Transactions on Geoscience and Remote Sensing, vol.56, no.9, pp.5083-5096, 2018.

(Full text may be available at the publisher's site)

The conventional strategy to download images captured by satellites is to compress the data on board and then transmit them via the downlink. It often happens that the capacity of the downlink is too small to accommodate all the acquired data, so the images are trimmed and/or transmitted through lossy regimes. This paper introduces a coding system that increases the amount and quality of the downloaded imaging data. The main insight of this work is to use both the uplink and the downlink to code the images. The uplink is employed to send reference information to the satellite so that the on-board coding system can achieve higher efficiency. This reference information is computed on the ground, possibly employing extensive data and computational resources. The proposed system is called dual link image coding. As it is devised in this work, it is suitable for Earth observation satellites with polar orbits. Experimental results obtained for datasets acquired by the Landsat 8 satellite indicate significant coding gains with respect to conventional methods.
@article { Auli18DLIC,
author = {Francesc Aul{\'i}-Llin{\`a}s and Michael W. Marcellin and Victor
Sanchez and Joan Bartrina-Rapesta and Miguel Hern{\'a}ndez-Cabronero},
title = {Dual Link Image Coding for Earth Observation Satellites},
journal = {IEEE Transactions on Geoscience and Remote Sensing},
volume = {56},
number = {9},
pages = {5083-5096},
year = {2018},
url = {https://ieeexplore.ieee.org/document/8307760},
abstract = {The conventional strategy to download images captured by satellites
is to compress the data on board and then transmit them via the downlink. It
often happens that the capacity of the downlink is too small to accommodate all
the acquired data, so the images are trimmed and/or transmitted through lossy
regimes. This paper introduces a coding system that increases the amount and
quality of the downloaded imaging data. The main insight of this work is to use
both the uplink and the downlink to code the images. The uplink is employed to
send reference information to the satellite so that the on-board coding system
can achieve higher efficiency. This reference information is computed on the
ground, possibly employing extensive data and computational resources. The
proposed system is called dual link image coding. As it is devised in this work,
it is suitable for Earth observation satellites with polar orbits. Experimental
results obtained for datasets acquired by the Landsat 8 satellite indicate
significant coding gains with respect to conventional methods.}
}


• Miguel Hernández-Cabronero, Michael W. Marcellin, Ian Blanes, Joan Serra-Sagristà,
"Lossless Compression of Color Filter Array Mosaic Images with Visualization via JPEG 2000",
IEEE Transactions on Multimedia, vol.20, no.2, pp.257-270, 2018.

(Also available at the publisher's site)

Digital cameras have become ubiquitous for amateur and professional applications. The raw images captured by digital sensors typically take the form of color filter array (CFA) mosaic images, which must be "developed" (via digital signal processing) before they can be viewed. Photographers and scientists often repeat the "development process" using different parameters to obtain images suitable for different purposes. Since the development process is generally not invertible, it is commonly desirable to store the raw (or undeveloped) mosaic images indefinitely. Uncompressed mosaic image file sizes can be more than 30 times larger than those of developed images stored in JPEG format. Data compression is thus of interest. Several compression methods for mosaic images have been proposed in the literature. However, they all require a custom decompressor followed by development-specific software to generate a displayable image. In this paper, a novel compression pipeline is proposed that removes these requirements. Specifically, mosaic images can be losslessly recovered from the resulting compressed files, and, more significantly, images can be directly viewed (decompressed and developed) using only a JPEG~2000 compliant image viewer. Experiments reveal that the proposed pipeline attains excellent visual quality, while providing compression performance competitive to that of state-of-the-art compression algorithms for mosaic images.
@article { Hernandez18TMM,
author = {Miguel Hern{\'a}ndez-Cabronero and Michael W. Marcellin and Ian
Blanes and Joan Serra-Sagrist{\`a}},
title = {Lossless Compression of Color Filter Array Mosaic Images with
Visualization via JPEG 2000},
journal = {IEEE Transactions on Multimedia},
year = {2018},
volume = {20},
number = {2},
pages = {257-270},
url = {http://dx.doi.org/10.1109/TMM.2017.2741426},
supplementary_url =
{http://deic.uab.cat/~mhernandez/media/imagesets/bayer_cfa_multimedia_materials.zip},
abstract = {Digital cameras have become ubiquitous for amateur and professional
applications. The raw images captured by digital sensors typically take the form
of color filter array (CFA) mosaic images, which must be "developed" (via
digital signal processing) before they can be viewed. Photographers and
scientists often repeat the "development process" using different parameters to
obtain images suitable for different purposes. Since the development process is
generally not invertible, it is commonly desirable to store the raw (or
undeveloped) mosaic images indefinitely. Uncompressed mosaic image file sizes
can be more than 30 times larger than those of developed images stored in JPEG
format. Data compression is thus of interest. Several compression methods for
mosaic images have been proposed in the literature. However, they all require a
custom decompressor followed by development-specific software to generate a
displayable image. In this paper, a novel compression pipeline is proposed that
removes these requirements. Specifically, mosaic images can be losslessly
recovered from the resulting compressed files, and, more significantly, images
can be directly viewed (decompressed and developed) using only a JPEG~2000
compliant image viewer. Experiments reveal that the proposed pipeline attains
excellent visual quality, while providing compression performance competitive to
that of state-of-the-art compression algorithms for mosaic images.}
}


• Feng Liu, Yuzhang Lin, M. Hernández-Cabronero, Eze Ahanonu, Michael W. Marcellin, Amit Ashok, Ali Bilgin,
"A Visual Discrimination Model for JPEG2000 Compression",
In Proceedings of the IEEE Data Compression Conference, DCC, pp.424-424, March 2018.

(Full text may be available at the publisher's site)

@inproceedings { Liu2018DCC,
author = {Feng Liu and Yuzhang Lin and M. Hern{\'a}ndez-Cabronero and Eze
Ahanonu and Michael W. Marcellin and Amit Ashok and Ali Bilgin},
title = {A Visual Discrimination Model for JPEG2000 Compression},
booktitle = {Proceedings of the IEEE Data Compression Conference, DCC},
pages = {424-424},
year = {2018},
month = {March},
url = {https://ieeexplore.ieee.org/abstract/document/8416641},
}



# 2017

• Feng Liu, Miguel Hernández-Cabronero, Victor Sanchez, Michael W. Marcellin, Ali Bilgin,
"The Current Role of Image Compression Standards in Medical Imaging",
MDPI Information, vol.8, no.4, 2017.

(Also available at the publisher's site)

With the increasing utilization of medical imaging in clinical practice and the growing dimensions of data volumes generated by various medical imaging modalities, the distribution, storage, and management of digital medical image data sets requires data compression. Over the past few decades, several image compression standards have been proposed by international standardization organizations. This paper discusses the current status of these image compression standards in medical imaging applications together with some of the legal and regulatory issues surrounding the use of compression in medical settings.
@article { Feng17MDPI,
author = {Feng Liu and Miguel Hern{\'a}ndez-Cabronero and Victor Sanchez and
Michael W. Marcellin and Ali Bilgin},
title = {The Current Role of Image Compression Standards in Medical Imaging},
journal = {MDPI Information},
year = {2017},
volume = {8},
number = {4},
url = {http://dx.doi.org/10.3390/info8040131},
abstract = {With the increasing utilization of medical imaging in clinical
practice and the growing dimensions of data volumes generated by various medical
imaging modalities, the distribution, storage, and management of digital medical
image data sets requires data compression. Over the past few decades, several
image compression standards have been proposed by international standardization
organizations. This paper discusses the current status of these image
compression standards in medical imaging applications together with some of the
legal and regulatory issues surrounding the use of compression in medical
settings.}
}


• Sara Álvarez-Cortés, Naoufal Amrani, Miguel Hernández-Cabronero, Joan Serra-Sagristà,
"Progressive Lossy-to-Lossless Coding of Hyperspectral Images through Regression Wavelet Analysis",
Taylor & Francis International Journal of Remote Sensing, vol.39, no.7, pp.1-21, 2017.

(Full text may be available at the publisher's site)

@article { Alvarez17TFIJRS,
author = {Sara {\'A}lvarez-Cort{\'e}s and Naoufal Amrani and Miguel
Hern{\'a}ndez-Cabronero and Joan Serra-Sagrist{\`a}},
title = {Progressive Lossy-to-Lossless Coding of Hyperspectral Images through
Regression Wavelet Analysis},
journal = {Taylor \& Francis International Journal of Remote Sensing},
pages = {1-21},
year = {2017},
volume = {39},
number = {7},
url = {http://dx.doi.org/10.1080/01431161.2017.1343515},
}


• Lee Prangnell, Miguel Hernández-Cabronero, Victor Sanchez,
"Coding Block-Level Perceptual Video Coding for 4:4:4 Data in HEVC",
In Proceedings of the IEEE International Conference on Image Processing, ICIP, pp.2488-2492, September 2017.

There is an increasing consumer demand for high bit-depth 4:4:4 HD video data playback due to its superior perceptual visual quality compared with standard 8-bit subsampled 4:2:0 video data. Due to vast file sizes and associated bitrates, it is desirable to compress raw high bit-depth 4:4:4 HD video sequences as much as possible without incurring a discernible decrease in visual quality. In this paper, we propose a Coding Block (CB)-level adaptive perceptual video compression method for HEVC named Full Color Perceptual Quantization (FCPQ). FCPQ is designed to perceptually adjust the Quantization Parameter (QP) at the CB level -i.e., the luma CB and the chroma Cb and Cr CBs- according to the variances of pixel data in each CB. FCPQ is based on the default adaptive perceptual quantization method in the HEVC, known as AdaptiveQP. AdaptiveQP perceptually adjusts the QP of an entire 2Nx2N CU (i.e., one QP applied to the Y CB, the Cb CB and the Cr CB) based only on the spatial activity of the luma CB; it does not account for the spatial activity of the chroma Cb and Cr CBs. This has the potential to affect coding performance, primarily because the variance of pixel data in a luma CB is notably different from the variances of pixel data in chroma Cb and Cr CBs. FCPQ addresses this problem. In terms of coding performance, FCPQ achieves BD-Rate improvements of up to 39.5\% (Y), 16\% (Cb) and 29.9\% (Cr) compared with AdaptiveQP.
@inproceedings { Prangnell17ICIP,
author = {Lee Prangnell and Miguel Hern{\'a}ndez-Cabronero and Victor Sanchez},
title = {Coding Block-Level Perceptual Video Coding for 4:4:4 Data in HEVC},
booktitle = {Proceedings of the IEEE International Conference on Image
Processing, ICIP},
year = {2017},
pages = {2488-2492},
month = {September},
abstract = {There is an increasing consumer demand for high bit-depth 4:4:4 HD
video data playback due to its superior perceptual visual quality compared with
standard 8-bit subsampled 4:2:0 video data. Due to vast file sizes and
associated bitrates, it is desirable to compress raw high bit-depth 4:4:4 HD
video sequences as much as possible without incurring a discernible decrease in
visual quality. In this paper, we propose a Coding Block (CB)-level adaptive
perceptual video compression method for HEVC named Full Color Perceptual
Quantization (FCPQ). FCPQ is designed to perceptually adjust the Quantization
Parameter (QP) at the CB level -i.e., the luma CB and the chroma Cb and Cr CBs-
according to the variances of pixel data in each CB. FCPQ is based on the
default adaptive perceptual quantization method in the HEVC, known as
AdaptiveQP. AdaptiveQP perceptually adjusts the QP of an entire 2Nx2N CU (i.e.,
one QP applied to the Y CB, the Cb CB and the Cr CB) based only on the spatial
activity of the luma CB; it does not account for the spatial activity of the
chroma Cb and Cr CBs. This has the potential to affect coding performance,
primarily because the variance of pixel data in a luma CB is notably different
from the variances of pixel data in chroma Cb and Cr CBs. FCPQ addresses this
problem. In terms of coding performance, FCPQ achieves BD-Rate improvements of
up to 39.5\% (Y), 16\% (Cb) and 29.9\% (Cr) compared with AdaptiveQP.}
}


• Lee Prangnell, Miguel Hernández-Cabronero, Victor Sanchez,
"Cross-Color Channel Perceptually Adaptive Quantization for HEVC",
In Proceedings of the IEEE Data Compression Conference, DCC, pp.456-456, April 2017.

(Full text may be available at the publisher's site)

HEVC includes a Coding Unit (CU) level luminance-based perceptual quantization technique known as AdaptiveQP. AdaptiveQP perceptually adjusts the Quantization Parameter (QP) at the CU level based on the spatial activity of raw input video data in a luma Coding Block (CB). In this paper, we propose a novel cross-color channel adaptive quantization scheme which perceptually adjusts the CU level QP according to the spatial activity of raw input video data in the constituent luma and chroma CBs; i.e., the combined spatial activity across all three color channels (the Y, Cb and Cr channels). Our technique is evaluated in HM 16 with 4:4:4, 4:2:2 and 4:2:0 YCbCr JCT-VC test sequences. Both subjective and objective visual quality evaluations are undertaken during which we compare our method with AdaptiveQP. Our technique achieves considerable coding efficiency improvements, with maximum BD-Rate reductions of 15.9% (Y), 13.1% (Cr) and 16.1% (Cb) in addition to a maximum decoding time reduction of 11.0%.
@inproceedings { Prangnell17DCC,
author = {Lee Prangnell and Miguel Hern{\'a}ndez-Cabronero and Victor Sanchez},
title = {Cross-Color Channel Perceptually Adaptive Quantization for HEVC},
booktitle = {Proceedings of the IEEE Data Compression Conference, DCC},
pages = {456-456},
year = {2017},
month = {April},
url = {http://dx.doi.org/10.1109/DCC.2017.66},
abstract = {HEVC includes a Coding Unit (CU) level luminance-based perceptual
quantization technique known as AdaptiveQP. AdaptiveQP perceptually adjusts the
Quantization Parameter (QP) at the CU level based on the spatial activity of raw
input video data in a luma Coding Block (CB). In this paper, we propose a novel
cross-color channel adaptive quantization scheme which perceptually adjusts the
CU level QP according to the spatial activity of raw input video data in the
constituent luma and chroma CBs; i.e., the combined spatial activity across all
three color channels (the Y, Cb and Cr channels). Our technique is evaluated in
HM 16 with 4:4:4, 4:2:2 and 4:2:0 YCbCr JCT-VC test sequences. Both subjective
and objective visual quality evaluations are undertaken during which we compare
our method with AdaptiveQP. Our technique achieves considerable coding
efficiency improvements, with maximum BD-Rate reductions of 15.9% (Y), 13.1%
(Cr) and 16.1% (Cb) in addition to a maximum decoding time reduction of 11.0%.}
}



# 2016

• Miguel Hernández-Cabronero, Victor Sanchez, Francesc Aulí-Llinàs, Joan Serra-Sagristà,
"Fast MCT Optimization for the Compression of Whole-Slide Images",
In Proceedings of the IEEE International Conference on Image Processing, ICIP, pp.2370-2374, September 2016.

(Also available at the publisher's site)

Lossy compression techniques based on multi-component transformation (MCT) can effectively enhance the storage and transmission of whole-slide images (WSIs) without adversely affecting subsequent diagnosis processes. Component transforms that are designed for other types of images or that do not take into account all aspects of the compression algorithm applied on the transformed components yield suboptimal coding performance. Recently, an MCT optimization framework adapted to the particularities of the input WSI and the following compression was proposed, yielding superior coding performance than the state of the art. However, its time complexity is too high for practical purposes. In this work FastOptimizeMCT, a fast version of this framework based on smart sampling of regions depicting tissue, is proposed. Exhaustive experimental evidence indicates that FastOptimizeMCT exhibits reasonable time complexity results -similar to that of scanning the WSIs- and coding performance that outperforms the KLT and the OST by 1.47 dB and 1.07 dB, respectively.
@inproceedings { Hernandez16ICIP,
author = {Miguel Hern{\'a}ndez-Cabronero and Victor Sanchez and Francesc
Aul{\'i}-Llin{\`a}s and Joan Serra-Sagrist{\`a}},
title = {Fast MCT Optimization for the Compression of Whole-Slide Images},
booktitle = {Proceedings of the IEEE International Conference on Image
Processing, ICIP},
year = {2016},
pages = {2370-2374},
month = {September},
url = {http://ieeexplore.ieee.org/document/7532783/},
abstract = {Lossy compression techniques based on multi-component
transformation (MCT) can effectively enhance the storage and transmission of
whole-slide images (WSIs) without adversely affecting subsequent diagnosis
processes. Component transforms that are designed for other types of images or
that do not take into account all aspects of the compression algorithm applied
on the transformed components yield suboptimal coding performance. Recently, an
MCT optimization framework adapted to the particularities of the input WSI and
the following compression was proposed, yielding superior coding performance
than the state of the art. However, its time complexity is too high for
practical purposes. In this work FastOptimizeMCT, a fast version of this
framework based on smart sampling of regions depicting tissue, is proposed.
Exhaustive experimental evidence indicates that FastOptimizeMCT exhibits
reasonable time complexity results -similar to that of scanning the WSIs- and
coding performance that outperforms the KLT and the OST by 1.47 dB and 1.07 dB,
respectively.}
}


• Sara Álvarez-Cortés, Naoufal Amrani, Miguel Hernández-Cabronero, Joan Serra-Sagristà,
"Progressive-Lossy-to-Lossless Coding of Hyperspectral Images",
In Proceedings of the ESA On-Board Payload Data Compression Workshop, OBPDC, September 2016.

@inproceedings { Alvarez16,
author = {Sara {\'A}lvarez-Cort{\'e}s and Naoufal Amrani and Miguel
Hern{\'a}ndez-Cabronero and Joan Serra-Sagrist{\`a}},
title = {Progressive-Lossy-to-Lossless Coding of Hyperspectral Images},
booktitle = {Proceedings of the ESA On-Board Payload Data Compression
Workshop, OBPDC},
year = {2016},
month = {September},
}


• Miguel Hernández-Cabronero, Victor Sanchez, Francesc Aulí-Llinàs, Joan Serra-Sagristà,
"Lossy Compression of Natural HDR Content Based on Multi-Component Transform Optimization",
In Proceedings of the IEEE Digital Media Industry and Academy Forum, DMIAF, pp.23-28, July 2016.

(Also available at the publisher's site)

Linear multi-component transforms (MCTs) are commonly employed for enhancing the coding performance for the compression of natural color images. Popular MCTs such as the RGB to Y'CbCr transform are not optimized specifically for any given input image. Data-dependent transforms such as the Karhunen-Loève Transform (KLT) or the Optimal Spectral Transform (OST) optimize some analytical criteria (e.g., the inter-component correlation or mutual information), but do not consider all aspects of the coding system applied to the transformed components. Recently, a framework that produces optimized MCTs dependent on the input image and the subsequent coding system was proposed for 8-bit pathology whole-slide images. This work extends this framework to higher bitdepths and investigates its performance for different types of high-dynamic range (HDR) contents. Experimental results indicate that the optimized MCTs yield average PSNR results 0.17%, 0.47% and 0.63% higher than those of the KLT for raw mosaic images, reconstructed HDR radiance scenes and color graded HDR contents, respectively.
@inproceedings { Hernandez16DMIAF,
author = {Miguel Hern{\'a}ndez-Cabronero and Victor Sanchez and Francesc
Aul{\'i}-Llin{\`a}s and Joan Serra-Sagrist{\`a}},
title = {Lossy Compression of Natural HDR Content Based on Multi-Component
Transform Optimization},
booktitle = {Proceedings of the IEEE Digital Media Industry and Academy
Forum, DMIAF},
year = {2016},
month = {July},
pages = {23-28},
url = {http://ieeexplore.ieee.org/document/7574894/},
abstract = {Linear multi-component transforms (MCTs) are commonly employed for
enhancing the coding performance for the compression of natural color images.
Popular MCTs such as the RGB to Y'CbCr transform are not optimized specifically
for any given input image. Data-dependent transforms such as the Karhunen-Loève
Transform (KLT) or the Optimal Spectral Transform (OST) optimize some analytical
criteria (e.g., the inter-component correlation or mutual information), but do
not consider all aspects of the coding system applied to the transformed
components. Recently, a framework that produces optimized MCTs dependent on the
input image and the subsequent coding system was proposed for 8-bit pathology
whole-slide images. This work extends this framework to higher bitdepths and
investigates its performance for different types of high-dynamic range (HDR)
contents. Experimental results indicate that the optimized MCTs yield average
PSNR results 0.17%, 0.47% and 0.63% higher than those of the KLT for raw mosaic
images, reconstructed HDR radiance scenes and color graded HDR contents,
respectively.}
}


• Miguel Hernández-Cabronero, Ian Blanes, Armando J. Pinho, Michael W. Marcellin, Joan Serra-Sagristà,
"Progressive Lossy-to-Lossless Compression of DNA Microarray Images",
IEEE Signal Processing Letters, vol.23, no.5, pp.698-702, May 2016.

(Also available at the publisher's site)

The analysis techniques applied to DNA microarray images are under active development. As new techniques become available, it will be useful to apply them to existing microarray images to obtain more accurate results. The compression of these images can be a useful tool to alleviate the costs associated with their storage and transmission. The recently proposed Relative Quantizer (RQ) coder provides the most competitive lossy compression ratios while introducing only acceptable changes in the images. However, images compressed with the RQ coder can only be reconstructed with a limited quality, determined before compression. In this work, a progressive lossy-to-lossless scheme is presented to solve this problem. Firstly, the regular structure of the RQ intervals is exploited to define a lossy-to-lossless coding algorithm called the Progressive RQ (PRQ) coder. Secondly, an enhanced version that prioritizes a region of interest, called the PRQ-ROI coder, is described. Experiments indicate that the PRQ coder offers progressivity with lossless and lossy coding performance almost identical to the best techniques in the literature, none of which is progressive. In turn, the PRQ-ROI exhibits very similar lossless coding results with better rate-distortion performance than both the RQ and PRQ coders.
@article { Hernandez16SPL,
author = {Miguel Hern{\'a}ndez-Cabronero and Ian Blanes and Armando J. Pinho
and Michael W. Marcellin and Joan Serra-Sagrist{\`a}},
title = {Progressive Lossy-to-Lossless Compression of DNA Microarray Images},
journal = {IEEE Signal Processing Letters},
year = {2016},
month = {May},
number = {5},
pages = {698-702},
volume = {23},
url = {http://dx.doi.org/10.1109/LSP.2016.2547893},
abstract = {The analysis techniques applied to DNA microarray images are under
active development. As new techniques become available, it will be useful to
apply them to existing microarray images to obtain more accurate results. The
compression of these images can be a useful tool to alleviate the costs
associated with their storage and transmission. The recently proposed Relative
Quantizer (RQ) coder provides the most competitive lossy compression ratios
while introducing only acceptable changes in the images. However, images
compressed with the RQ coder can only be reconstructed with a limited quality,
determined before compression. In this work, a progressive lossy-to-lossless
scheme is presented to solve this problem. Firstly, the regular structure of the
RQ intervals is exploited to define a lossy-to-lossless coding algorithm called
the Progressive RQ (PRQ) coder. Secondly, an enhanced version that prioritizes a
region of interest, called the PRQ-ROI coder, is described. Experiments indicate
that the PRQ coder offers progressivity with lossless and lossy coding
performance almost identical to the best techniques in the literature, none of
which is progressive. In turn, the PRQ-ROI exhibits very similar lossless coding
results with better rate-distortion performance than both the RQ and PRQ
coders.}
}


• Victor Sanchez, Miguel Hernández-Cabronero, Francesc Aulí-Llinàs, Joan Serra-Sagristà,
"Fast Lossless Compression Of Whole Slide Pathology Images Using HEVC Intra-Prediction",
In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, March 2016.

(Full text may be available at the publisher's site)

The lossless compression of Whole Slide pathology Images (WSIs) using HEVC is investigated in this paper. Recently proposed intra-prediction algorithms based on differential pulse-code modulation (DPCM) and edge prediction provide significant bitrate improvements for a wide range of natural and screen content sequences, including WSIs. However, coding times remain relatively high due to the high number (35) of modes to be tested. In this paper, FastIntra, a novel method that requires testing only four modes is proposed. Among these four modes, FastIntra introduces a novel median edge predictor designed to accurately predict edges in different directionalities. Performance evaluations on various WSIs show average compression time reductions of 23.5% with important lossless coding improvements as compared to current block-wise intra-prediction and DPCM-based methods.
@inproceedings { Hernandez16ICASSP,
author = {Victor Sanchez and Miguel Hern{\'a}ndez-Cabronero and Francesc
Aul{\'i}-Llin{\`a}s and Joan Serra-Sagrist{\`a}},
title = {Fast Lossless Compression Of Whole Slide Pathology Images Using HEVC
Intra-Prediction},
booktitle = {Proceedings of the IEEE International Conference on Acoustics,
Speech and Signal Processing, ICASSP},
year = {2016},
month = {March},
url = {http://ieeexplore.ieee.org/document/7471918/},
abstract = {The lossless compression of Whole Slide pathology Images (WSIs)
using HEVC is investigated in this paper. Recently proposed intra-prediction
algorithms based on differential pulse-code modulation (DPCM) and edge
prediction provide significant bitrate improvements for a wide range of natural
and screen content sequences, including WSIs. However, coding times remain
relatively high due to the high number (35) of modes to be tested. In this
paper, FastIntra, a novel method that requires testing only four modes is
proposed. Among these four modes, FastIntra introduces a novel median edge
predictor designed to accurately predict edges in different directionalities.
Performance evaluations on various WSIs show average compression time reductions
of 23.5% with important lossless coding improvements as compared to current
block-wise intra-prediction and DPCM-based methods.}
}


• Miguel Hernández-Cabronero, Francesc Aulí-Llinàs, Victor Sanchez, Joan Serra-Sagristà,
"Transform Optimization for the Lossy Coding of Pathology Whole-Slide Images",
In Proceedings of the IEEE Data Compression Conference, DCC, pp.131-140, March 2016.

(Also available at the publisher's site)

Whole-slide images (WSIs) are high-resolution, 2D, color digital images that are becoming valuable tools for pathologists in clinical, research and formative scenarios. However, their massive size is hindering their widespread adoption. Even though lossy compression can effectively reduce compressed file sizes without affecting subsequent diagnoses, no lossy coding scheme tailored for WSIs has been described in the literature. In this paper, a novel strategy called OptimizeMCT is proposed to increase the lossy coding performance for this type of images. In particular, an optimization method is designed to find image-specific multi-component transforms (MCTs) that exploit the high inter-component correlation present in WSIs. Experimental evidence indicates that the transforms yielded by OptimizeMCT consistently attain better coding performance than the Karhunen-Loève Transform (KLT) for all tested lymphatic, pancreatic and renal WSIs. More specifically, images reconstructed at the same bitrate exhibit average PSNR values 2.85 dB higher for OptimizeMCT than for the KLT, with differences of up to 5.17 dB.
@inproceedings { Hernandez16DCC,
author = {Miguel Hern{\'a}ndez-Cabronero and Francesc Aul{\'i}-Llin{\`a}s and
Victor Sanchez and Joan Serra-Sagrist{\`a}},
title = {Transform Optimization for the Lossy Coding of Pathology Whole-Slide
Images},
booktitle = {Proceedings of the IEEE Data Compression Conference, DCC},
year = {2016},
month = {March},
pages = {131--140},
url = {http://ieeexplore.ieee.org/document/7786157/},
abstract = {Whole-slide images (WSIs) are high-resolution, 2D, color digital
images that are becoming valuable tools for pathologists in clinical, research
and formative scenarios. However, their massive size is hindering their
widespread adoption. Even though lossy compression can effectively reduce
compressed file sizes without affecting subsequent diagnoses, no lossy coding
scheme tailored for WSIs has been described in the literature. In this paper, a
novel strategy called OptimizeMCT is proposed to increase the lossy coding
performance for this type of images. In particular, an optimization method is
designed to find image-specific multi-component transforms (MCTs) that exploit
the high inter-component correlation present in WSIs. Experimental evidence
indicates that the transforms yielded by OptimizeMCT consistently attain better
coding performance than the Karhunen-Loève Transform (KLT) for all tested
lymphatic, pancreatic and renal WSIs. More specifically, images reconstructed at
the same bitrate exhibit average PSNR values 2.85 dB higher for OptimizeMCT than
for the KLT, with differences of up to 5.17 dB.}
}


• Naoufal Amrani, Joan Serra-Sagristà, Miguel Hernández-Cabronero, Michael W. Marcellin,
"Regression Wavelet Analysis for Progressive-Lossy-to-Lossless Coding of Remote-Sensing Data",
In Proceedings of the IEEE Data Compression Conference, DCC, pp.121-130, March 2016.

(Full text may be available at the publisher's site)

Regression Wavelet Analysis (RWA) is a novel wavelet-based scheme for coding hyperspectral images that employs multiple regression analysis to exploit the relationships among spectral wavelet-transformed components. The scheme is based on a pyramidal prediction, using different regression models, to increase the statistical independence in the wavelet domain. For lossless coding, RWA has proven to be superior to other spectral transforms like PCA and to the best and most recent coding standard in remote sensing, CCSDS-123.0. In this paper we show that RWA also allows progressive lossy-to-lossless (PLL) coding and that it attains a rate-distortion performance superior to those obtained with state-of-the-art schemes. To take into account the predictive significance of the spectral components, we propose a Prediction Weighting scheme for JPEG2000 that captures the contribution of each transformed component to the prediction process.
@inproceedings { Naoufal16DCC,
author = {Naoufal Amrani and Joan Serra-Sagrist{\`a} and Miguel
Hern{\'a}ndez-Cabronero and Michael W. Marcellin},
title = {Regression Wavelet Analysis for Progressive-Lossy-to-Lossless Coding
of Remote-Sensing Data},
booktitle = {Proceedings of the IEEE Data Compression Conference, DCC},
year = {2016},
month = {March},
pages = {121-130},
url = {http://ieeexplore.ieee.org/document/7786156/},
abstract = {Regression Wavelet Analysis (RWA) is a novel wavelet-based scheme
for coding hyperspectral images that employs multiple regression analysis to
exploit the relationships among spectral wavelet-transformed components. The
scheme is based on a pyramidal prediction, using different regression models, to
increase the statistical independence in the wavelet domain. For lossless
coding, RWA has proven to be superior to other spectral transforms like PCA and
to the best and most recent coding standard in remote sensing, CCSDS-123.0. In
this paper we show that RWA also allows progressive lossy-to-lossless (PLL)
coding and that it attains a rate-distortion performance superior to those
obtained with state-of-the-art schemes. To take into account the predictive
significance of the spectral components, we propose a Prediction Weighting
scheme for JPEG2000 that captures the contribution of each transformed component
to the prediction process.}
}
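The multiple-regression prediction at the heart of RWA can be illustrated with a small sketch (a hypothetical simplification in plain Python, not the authors' implementation): a wavelet-transformed component is predicted as a weighted sum of other components, with weights obtained by least squares, and only the low-energy residual is handed to the entropy coder.

```python
def regression_weights(a, b, y):
    """Least-squares weights (w1, w2) predicting component y from
    components a and b via the 2x2 normal equations (no intercept,
    for brevity)."""
    saa = sum(x * x for x in a)
    sbb = sum(x * x for x in b)
    sab = sum(x * z for x, z in zip(a, b))
    say = sum(x * z for x, z in zip(a, y))
    sby = sum(x * z for x, z in zip(b, y))
    det = saa * sbb - sab * sab
    w1 = (say * sbb - sby * sab) / det
    w2 = (sby * saa - say * sab) / det
    return w1, w2

def residual(a, b, y, w1, w2):
    """Prediction residual: what is actually coded instead of y."""
    return [yi - (w1 * ai + w2 * bi) for ai, bi, yi in zip(a, b, y)]
```

When a component is well predicted by the others, the residual has far lower energy than the original samples, which is where the lossless coding gain comes from.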


• Miguel Hernández-Cabronero, Ian Blanes, Armando J. Pinho, Michael W. Marcellin, Joan Serra-Sagristà,
"Analysis-Driven Lossy Compression of DNA Microarray Images",
IEEE Transactions on Medical Imaging, vol.35, no.2, pp.654-664, February 2016.

(Also available at the publisher's site)

DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yield only limited compression performance (compression ratios below 2:1), whereas lossy coding methods may introduce unacceptable distortions in the analysis process. This work introduces a novel Relative Quantizer (RQ), which employs non-uniform quantization intervals designed for improved compression while bounding the impact on the DNA microarray analysis. This quantizer constrains the maximum relative error introduced into quantized imagery, devoting higher precision to pixels critical to the analysis process. For suitable parameter choices, the resulting variations in the DNA microarray analysis are less than half of those inherent to the experimental variability. Experimental results reveal that appropriate analysis can still be performed for average compression ratios exceeding 4.5:1.
@article { Hernandez16TMI,
author = {Miguel Hern{\'a}ndez-Cabronero and Ian Blanes and Armando J. Pinho
and Michael W. Marcellin and Joan Serra-Sagrist{\`a}},
title = {Analysis-Driven Lossy Compression of DNA Microarray Images},
journal = {IEEE Transactions on Medical Imaging},
year = {2016},
month = {February},
number = {2},
pages = {654-664},
volume = {35},
url = {http://dx.doi.org/10.1109/TMI.2015.2489262},
abstract = {DNA microarrays are one of the fastest-growing new technologies in
the field of genetic research, and DNA microarray images continue to grow in
number and size. Since analysis techniques are under active and ongoing
development, storage, transmission and sharing of DNA microarray images need to be
addressed, with compression playing a significant role. However, existing
lossless coding algorithms yield only limited compression performance
(compression ratios below 2:1), whereas lossy coding methods may introduce
unacceptable distortions in the analysis process. This work introduces a novel
Relative Quantizer (RQ), which employs non-uniform quantization intervals
designed for improved compression while bounding the impact on the DNA
microarray analysis. This quantizer constrains the maximum relative error
introduced into quantized imagery, devoting higher precision to pixels critical
to the analysis process. For suitable parameter choices, the resulting
variations in the DNA microarray analysis are less than half of those inherent
to the experimental variability. Experimental results reveal that appropriate
analysis can still be performed for average compression ratios exceeding 4.5:1.}
}
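The defining property of the Relative Quantizer, a bounded maximum relative error, can be sketched with geometrically spaced bins (a hedged illustration of the principle, not the paper's exact interval design): bin widths grow with intensity, so bright pixels share wide bins while dim, analysis-critical pixels keep fine precision.

```python
import math

def rq_quantize(x, max_rel_err=0.05):
    """Bin index for an intensity x >= 1. Consecutive bin edges differ
    by a constant ratio r, so every bin spans the same relative range."""
    r = (1.0 + max_rel_err) ** 2  # constant ratio between bin edges
    return math.floor(math.log(x) / math.log(r))

def rq_dequantize(k, max_rel_err=0.05):
    """Reconstruct at the geometric midpoint of bin k, which keeps the
    relative error of every value in the bin at or below max_rel_err."""
    r = (1.0 + max_rel_err) ** 2
    return r ** k * math.sqrt(r)
```

For every intensity x >= 1, `abs(rq_dequantize(rq_quantize(x)) - x) / x` stays at or below `max_rel_err`, while the number of distinct bin indices is far smaller than the number of raw intensity levels, which is what improves compressibility.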



# 2015

• Ian Blanes, Miguel Hernández-Cabronero, Francesc Aulí-Llinàs, Joan Serra-Sagristà, Michael W. Marcellin,
"Isorange Pairwise Orthogonal Transform",
IEEE Transactions on Geoscience and Remote Sensing, vol.53, no.6, pp.3361-3372, June 2015.

(Full text may be available at the publisher's site)

@article { Blanes15,
author = {Ian Blanes and Miguel Hern{\'a}ndez-Cabronero and Francesc
Aul{\'i}-Llin{\`a}s and Joan Serra-Sagrist{\`a} and Michael W. Marcellin},
title = {Isorange Pairwise Orthogonal Transform},
journal = {IEEE Transactions on Geoscience and Remote Sensing},
year = {2015},
month = {June},
number = {6},
pages = {3361-3372},
volume = {53},
url = {http://dx.doi.org/10.1109/TGRS.2014.2374473},
}



# 2014

• Miguel Hernández-Cabronero, Victor Sanchez, Michael W. Marcellin, Joan Serra-Sagristà,
"Compression of DNA Microarray Images",
In Book "Microarray Image and Data Analysis: Theory and Practice", CRC Press, pp.193-222, 2014.

(Full text may be available at the publisher's site)

@incollection { Hernandez14RuedaBook,
author = {Miguel Hern{\'a}ndez-Cabronero and Victor Sanchez and Michael W.
Marcellin and Joan Serra-Sagrist{\`a}},
title = {Compression of DNA Microarray Images},
editor = {Luis Rueda},
publisher = {CRC Press},
year = {2014},
booktitle = {Microarray Image and Data Analysis: Theory and Practice},
pages = {193-222},
url = {http://www.crcpress.com/product/isbn/9781466586826},
}



# 2013

• Miguel Hernández-Cabronero, Victor Sanchez, Michael W. Marcellin, Joan Serra-Sagristà,
"A distortion metric for the lossy compression of DNA microarray images",
In proceedings of the IEEE Data Compression Conference, DCC, pp.171-180, 2013.

(Also available at the publisher's site)

DNA microarrays are state-of-the-art tools in biological and medical research. In this work, we discuss the suitability of lossy compression for DNA microarray images and highlight the necessity for a distortion metric to assess the loss of relevant information. We also propose one possible metric that considers the basic image features employed by most DNA microarray analysis techniques. Experimental results indicate that the proposed metric can identify and differentiate important and unimportant changes in DNA microarray images.
@inproceedings { Hernandez13DCC,
author = {Miguel Hern{\'a}ndez-Cabronero and Victor Sanchez and Michael W.
Marcellin and Joan Serra-Sagrist{\`a}},
title = {A distortion metric for the lossy compression of DNA microarray
images},
booktitle = {Proceedings of the IEEE Data Compression Conference (DCC)},
year = {2013},
pages = {171-180},
url = {http://dx.doi.org/10.1109/DCC.2013.26},
abstract = {DNA microarrays are state-of-the-art tools in biological and
medical research. In this work, we discuss the suitability of lossy compression
for DNA microarray images and highlight the necessity for a distortion metric to
assess the loss of relevant information. We also propose one possible metric
that considers the basic image features employed by most DNA microarray analysis
techniques. Experimental results indicate that the proposed metric can identify
and differentiate important and unimportant changes in DNA microarray images.}
}



# 2012

• Miguel Hernández-Cabronero, Juan Muñoz-Gómez, Ian Blanes, Joan Serra-Sagristà, Michael W. Marcellin,
"DNA microarray image coding",
In proceedings of the IEEE Data Compression Conference, DCC, pp.32-41, 2012.

(Also available at the publisher's site)

DNA microarrays are useful to identify the function and regulation of a large number of genes in a single experiment, even whole genomes. In this work, we analyze the relationship between DNA microarray image histograms and the compression performance of lossless JPEG2000. Also, a reversible transform based on histogram swapping is proposed. Intensive experimental results using different coding parameters are discussed. Results suggest that this transform improves previous lossless JPEG2000 results on all DNA microarray image sets.
@inproceedings { Hernandez12DCC,
author = {Miguel Hern{\'a}ndez-Cabronero and Juan Mu{\~n}oz-G{\'o}mez and Ian
Blanes and Joan Serra-Sagrist{\`a} and Michael W. Marcellin},
title = {DNA microarray image coding},
booktitle = {Proceedings of the IEEE Data Compression Conference (DCC)},
year = {2012},
pages = {32-41},
doi = {10.1109/DCC.2012.11},
url = {http://dx.doi.org/10.1109/DCC.2012.11},
abstract = {DNA microarrays are useful to identify the function and regulation
of a large number of genes in a single experiment, even whole genomes. In this
work, we analyze the relationship between DNA microarray image histograms and
the compression performance of lossless JPEG2000. Also, a reversible transform
based on histogram swapping is proposed. Intensive experimental results using
different coding parameters are discussed. Results suggest that this transform
improves previous lossless JPEG2000 results on all DNA microarray image sets.}
}
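The reversible histogram-swapping idea can be sketched as follows (an illustrative guess at the mechanism in plain Python; the paper's exact permutation rule may differ): intensities are remapped by frequency rank so the most common values receive the smallest codes, and the stored permutation makes the transform exactly invertible.

```python
from collections import Counter

def histogram_swap(pixels):
    """Remap each intensity to its frequency rank (most frequent -> 0).
    Returns the remapped pixels plus the rank->value permutation needed
    to invert the transform losslessly."""
    freq = Counter(pixels)
    order = sorted(freq, key=lambda v: (-freq[v], v))  # rank -> value
    fwd = {v: i for i, v in enumerate(order)}          # value -> rank
    return [fwd[p] for p in pixels], order

def histogram_unswap(ranks, order):
    """Exact inverse: recover the original intensities from the ranks."""
    return [order[r] for r in ranks]
```

Concentrating probability mass on the smallest symbol values reshapes the histogram into the kind of monotonically decaying distribution that JPEG2000's coder handles well, without losing a single bit.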


• Miguel Hernández-Cabronero, Francesc Aulí-Llinàs, Joan Bartrina-Rapesta, Ian Blanes, Leandro Jiménez-Rodríguez, Michael W. Marcellin, Juan Muñoz-Gómez, Victor Sanchez, Joan Serra-Sagristà, Zhongwei Xu,
"Multicomponent compression of DNA microarray images",
In Proceedings of the CEDI Workshop on Multimedia Data Coding and Transmission, WMDCT, September 2012.

(Also available at the publisher's site)

In this work, the correlation present among pairs of DNA microarray images is analyzed using Pearson's r as a metric. A certain amount of correlation is found, especially for red/green channel image pairs, with averages over 0.75 for all benchmark sets. Based on that, the lossless multicomponent compression features of JPEG2000 have been tested on each set, considering different spectral and spatial transforms (DWT 5/3, DPCM, R-Haar and POT). Improvements of up to 0.6 bpp are obtained depending on the transform considered, and these improvements are consistent with the correlation values observed.
@inproceedings { Hernandez12Sarteco,
author = {Miguel Hern{\'a}ndez-Cabronero and Francesc Aul{\'i}-Llin{\`a}s and
Joan Bartrina-Rapesta and Ian Blanes and Leandro Jim{\'e}nez-Rodr{\'i}guez and
Michael W. Marcellin and Juan Mu{\~n}oz-G{\'o}mez and Victor Sanchez and Joan
Serra-Sagrist{\`a} and Zhongwei Xu},
title = {Multicomponent compression of DNA microarray images},
booktitle = {Proceedings of the CEDI Workshop on Multimedia Data Coding and
Transmission (WMDCT)},
year = {2012},
month = {September},
abstract = {In this work, the correlation present among pairs of DNA microarray
images is analyzed using Pearson's r as a metric. A certain amount of
correlation is found, especially for red/green channel image pairs, with
averages over 0.75 for all benchmark sets. Based on that, the lossless
multicomponent compression features of JPEG2000 have been tested on each set,
considering different spectral and spatial transforms (DWT 5/3, DPCM, R-Haar and
POT). Improvements of up to 0.6~bpp are obtained depending on the transform
considered, and these improvements are consistent with the correlation values
observed.}
}
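Pearson's r between a red/green image pair, as used in the abstract above, is straightforward to compute over the flattened pixel samples (a minimal sketch in plain Python):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    covariance normalized by the product of standard deviations."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)
```

Channel pairs with r above 0.75, as reported for the benchmark sets, are good candidates for the spectral decorrelation transforms that the paper evaluates.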


• Miguel Hernández-Cabronero, Ian Blanes, Michael W. Marcellin, Joan Serra-Sagristà,
"Standard and specific compression techniques for DNA microarray images",
MDPI Algorithms, vol.5, no.1, pp.30-49, 2012.

(Also available at the publisher's site)

We review the state of the art in DNA microarray image compression and provide original comparisons between standard and microarray-specific compression techniques that validate and expand previous work. First, we describe the most relevant approaches published in the literature and classify them according to the stage of the typical image compression process where each approach makes its contribution, and then we summarize the compression results reported for these microarray-specific image compression schemes. In a set of experiments conducted for this paper, we obtain new results for several popular image coding techniques that include the most recent coding standards. Prediction-based schemes CALIC and JPEG-LS are the best performing standard compressors, but are improved upon by the best microarray-specific technique, Battiato's CNN-based scheme.
@article { Hernandez11Algorithms,
author = {Miguel Hern{\'a}ndez-Cabronero and Ian Blanes and Michael W.
Marcellin and Joan Serra-Sagrist{\`a}},
title = {Standard and specific compression techniques for DNA microarray
images},
journal = {MDPI Algorithms},
year = {2012},
pages = {30-49},
volume = {5},
number = {1},
doi = {10.3390/a5010030},
url = {http://www.mdpi.com/1999-4893/5/1/30/},
abstract = {We review the state of the art in DNA microarray image compression
and provide original comparisons between standard and microarray-specific
compression techniques that validate and expand previous work. First, we
describe the most relevant approaches published in the literature and classify
them according to the stage of the typical image compression process where each
approach makes its contribution, and then we summarize the compression results
reported for these microarray-specific image compression schemes. In a set of
experiments conducted for this paper, we obtain new results for several popular
image coding techniques that include the most recent coding standards.
Prediction-based schemes CALIC and JPEG-LS are the best performing standard
compressors, but are improved upon by the best microarray-specific technique,
Battiato's CNN-based scheme.}
}



# 2011

• Miguel Hernández-Cabronero, Ian Blanes, Michael W. Marcellin, Joan Serra-Sagristà,
"A review of DNA microarray image compression",
In Proceedings of the International Conference on Data Compression, Communication and Processing, CCP, pp.139-147, June 2011.

(Also available at the publisher's site)

We review the state of the art in DNA microarray image compression. First, we describe the most relevant approaches published in the literature and classify them according to the stage of the typical image compression process where each approach makes its contribution. We then summarize the compression results reported for these microarray-specific image compression schemes. In a set of experiments conducted for this paper, we obtain results for several popular image coding techniques, including the most recent coding standards. Prediction-based schemes CALIC and JPEG-LS, and JPEG2000 using zero wavelet decomposition levels are the best performing standard compressors, but are all outperformed by the best microarray-specific technique, Battiato's CNN-based scheme.
@inproceedings { Hernandez11CCP,
author = {Miguel Hern{\'a}ndez-Cabronero and Ian Blanes and Michael W.
Marcellin and Joan Serra-Sagrist{\`a}},
title = {A review of DNA microarray image compression},
booktitle = {Proceedings of the International Conference on Data Compression,
Communication and Processing (CCP)},
year = {2011},
month = {June},
pages = {139-147},
publisher = {IEEE},
doi = {10.1109/CCP.2011.21},
url = {http://dx.doi.org/10.1109/CCP.2011.21},
abstract = {We review the state of the art in DNA microarray image compression.
First, we describe the most relevant approaches published in the literature and
classify them according to the stage of the typical image compression process
where each approach makes its contribution. We then summarize the compression
results reported for these microarray-specific image compression schemes. In a
set of experiments conducted for this paper, we obtain results for several
popular image coding techniques, including the most recent coding standards.
Prediction-based schemes CALIC and JPEG-LS, and JPEG2000 using zero wavelet
decomposition levels are the best performing standard compressors, but are all
outperformed by the best microarray-specific technique, Battiato's CNN-based
scheme.}
}