DCT image codec using variance of sub-regions

Pooneh Bagheri Zadeh*, Akbar Sheikh Akbari, Tom Buggy

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review



    This paper presents a novel image-coding scheme based on the variance of sub-regions and the discrete cosine transform. The proposed encoder divides the input image into a number of non-overlapping blocks. The coefficients in each block are then transformed into their spatial frequencies using a discrete cosine transform. Coefficients with the same spatial frequency index in different blocks are grouped together, generating a number of matrices, where each matrix contains the coefficients of a particular spatial frequency index. The matrix containing the DC coefficients is losslessly coded to preserve its visually important information. Matrices containing high-frequency coefficients are coded using a variance of sub-regions based encoding algorithm proposed in this paper. Perceptual weights are used to regulate the threshold value required in the coding process of the high-frequency matrices. An extension of the system to progressive image transmission is also developed. The proposed coding scheme, JPEG and JPEG2000 were applied to a number of test images. Results show that the proposed coding scheme outperforms JPEG and JPEG2000 subjectively and objectively at low compression ratios. Results also indicate that images decoded by the proposed codec exhibit superior subjective quality at high compression ratios compared to those of JPEG, while offering satisfactory results compared to those of JPEG2000.
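
    The block-DCT and coefficient-regrouping step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the 8×8 block size, and the use of an orthonormal DCT-II are assumptions for the sketch; the lossless DC coding and the variance of sub-regions thresholding stages are not shown.

    ```python
    import numpy as np

    def dct_matrix(n):
        # Orthonormal DCT-II basis matrix: C[u, x] = s(u) * cos(pi*(2x+1)*u / (2n)).
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0] /= np.sqrt(2)
        return C * np.sqrt(2.0 / n)

    def regroup_by_frequency(image, bs=8):
        # Divide the image into non-overlapping bs x bs blocks, apply a 2-D DCT
        # to each block, then gather coefficients with the same frequency index
        # (u, v) from every block into one matrix per index, as the paper describes.
        h, w = image.shape
        C = dct_matrix(bs)
        # (h//bs, w//bs, bs, bs): one bs x bs block per grid position.
        blocks = image.reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
        coeffs = C @ blocks @ C.T  # 2-D DCT of every block at once
        # Result[u, v] is the matrix of (u, v) coefficients across all blocks;
        # Result[0, 0] is the DC matrix that would be losslessly coded.
        return coeffs.transpose(2, 3, 0, 1)
    ```

    For a 16×16 input with 8×8 blocks this yields 64 frequency matrices of size 2×2; the `[0, 0]` matrix holds the DC coefficients of the four blocks.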
    Original language: English
    Pages (from-to): 13-21
    Number of pages: 9
    Journal: Open Computer Science
    Issue number: 1
    Publication status: Published - 11 Aug 2015


    • discrete cosine transform
    • image compression
    • perceptual weights
    • quad-tree coding
    • variance of sub-regions


