Image Forgery Localization via Block-Grained Analysis of JPEG Artifacts

Abstract

In this paper, we propose a forensic algorithm to discriminate between original and forged regions in JPEG images, under the hypothesis that the tampered image presents a double JPEG compression, either aligned (A-DJPG) or nonaligned (NA-DJPG). Unlike previous approaches, the proposed algorithm does not need to manually select a suspect region in order to test the presence or the absence of double compression artifacts. Based on an improved and unified statistical model characterizing the artifacts that appear in the presence of both A-DJPG and NA-DJPG, the proposed algorithm automatically computes a likelihood map indicating the probability of each 8 × 8 discrete cosine transform block being doubly compressed. The validity of the proposed approach has been assessed by evaluating the performance of a detector based on thresholding the likelihood map under different forensic scenarios. The effectiveness of the proposed method is also confirmed by tests carried out on realistic tampered images. An interesting property of the proposed Bayesian approach is that it can be easily extended to work with traces left by other kinds of processing.
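
As a rough illustration of the detector mentioned above, the sketch below simply thresholds a per-block likelihood map to obtain a binary localization mask. This is not the authors' implementation; the map, its shape, and the threshold value are placeholders.

    import numpy as np

    def localize_forgery(likelihood_map, threshold=0.5):
        """Binarize a per-block likelihood map into a tampering mask.

        likelihood_map: 2-D array with one entry per 8x8 DCT block,
        giving the estimated probability that the block is forged.
        """
        return likelihood_map > threshold

    # Hypothetical usage: a random stand-in for a computed likelihood map.
    lmap = np.random.rand(64, 96)
    mask = localize_forgery(lmap, threshold=0.5)
    print(mask.sum(), "of", mask.size, "blocks flagged")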

INTRODUCTION

The diffusion of tampered visual content through the digital world is becoming increasingly widespread and worrying, due to the large availability of simple and effective image and video processing tools. Because of this issue, in many fields like insurance, law and order, journalism, and medical applications, there is an increasing interest in tools allowing us to assess the credibility of digital images as sources of information. As a possible answer to this request, many image forensic techniques have been proposed to detect the presence of forgeries in digital images through the analysis of the presence, or the perturbation, of traces left in the digital content during the creation process or any subsequent processing [1].

FORENSIC SCENARIOS

One of the most evident traces of tampering in JPEG images
is the presence of artifacts due to double compression. Basically,
such artifacts can be categorized into two classes, according to whether or not the second JPEG compression uses a DCT grid aligned with that of the first compression. However, in order to
correctly interpret the presence or the absence of these artifacts,
different scenarios have to be considered.
A first scenario is that in which an original JPEG image,
after some localized forgery, is saved again in JPEG format. We
can assume that during forgery an image processing technique
which disrupts JPEG compression statistics is applied. Examples
of this kind of manipulation are a cut and paste from either
a noncompressed image or a resized image, or the insertion
of computer generated content. In this case, DCT coefficients
of unmodified areas will undergo a double JPEG compression
thus exhibiting double quantization (DQ) artifacts, while DCT
coefficients of forged areas will likely not have the same DQ
artifacts.
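
To make the DQ effect concrete, the snippet below simulates single and double quantization of a set of DCT coefficients (modeled here, purely for illustration, as Laplacian samples) and compares the resulting histograms; the periodic peaks and gaps in the doubly quantized histogram are the artifacts that forensic analysis looks for. The quantization steps and the Laplacian scale are arbitrary choices, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    # Unquantized DCT coefficients, modeled as zero-mean Laplacian samples.
    coeffs = rng.laplace(scale=15.0, size=200_000)

    q1, q2 = 5, 3  # first and second quantization steps (illustrative)
    single = np.round(coeffs / q2)                      # singly compressed
    double = np.round(np.round(coeffs / q1) * q1 / q2)  # doubly compressed

    # The doubly quantized histogram shows periodic peaks and gaps (DQ
    # artifacts), while the singly quantized one decays smoothly.
    bins = np.arange(-30, 31)
    print(np.histogram(single, bins=bins)[0][25:35])
    print(np.histogram(double, bins=bins)[0][25:35])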

DOUBLE JPEG COMPRESSION MODELS

In this section, we will review the statistical models used
to characterize both A-DJPG and NA-DJPG artifacts. Then,
we will introduce some simplifications that will be useful in
defining the proposed detection algorithm, as well as some
modifications needed to take into account the effects of
rounding and truncation errors between the first compression
and the second compression. The notation used hereafter is
summarized in Table I.

JPEG Compression Model

The JPEG compression algorithm can be modeled by three basic steps [17]: 8 × 8 block DCT of the image pixels, uniform quantization of the DCT coefficients with a quantization matrix whose values depend on a quality factor, and entropy encoding of the quantized values. The image resulting from decompression is obtained by applying the inverse of each step in reverse order: entropy decoding, dequantization, and inverse block DCT. In the following analysis, we will consider that quantization is achieved by dividing each DCT coefficient by the corresponding quantization step and rounding the result to the nearest integer, whereas dequantization is achieved by simply multiplying the quantized value by the same step.
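
A minimal numerical sketch of this model for a single block, assuming an orthonormal DCT-II and a flat placeholder quantization matrix (the sample block and the matrix values below are illustrative, not the standard JPEG tables):

    import numpy as np

    def dct_matrix(n=8):
        # Orthonormal DCT-II basis: coeffs = C @ block @ C.T, block = C.T @ coeffs @ C
        k, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
        c[0, :] /= np.sqrt(2.0)
        return c

    C = dct_matrix(8)
    block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level-shifted pixels
    Q = np.full((8, 8), 16.0)  # placeholder quantization matrix (quality-dependent)

    coeffs = C @ block @ C.T          # 8x8 block DCT
    quantized = np.round(coeffs / Q)  # quantization: divide by the step, round to nearest integer
    dequantized = quantized * Q       # dequantization: multiply back by the same step
    recon = C.T @ dequantized @ C     # inverse block DCT (decompressed block, before rounding/clipping)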