IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 23, NO. 2, FEBRUARY 2013


Hardware Implementation of a Digital
Watermarking System for Video Authentication
Sonjoy Deb Roy, Xin Li, Yonatan Shoshan, Alexander Fish, Member, IEEE, and Orly Yadid-Pecht, Fellow, IEEE

Abstract—This paper presents a hardware implementation of a
digital watermarking system that can insert invisible, semifragile
watermark information into compressed video streams in real
time. The watermark embedding is processed in the discrete
cosine transform domain. To achieve high performance, the
proposed system architecture employs pipeline structure and uses
parallelism. Hardware implementation using field programmable
gate array has been done, and an experiment was carried out
using a custom versatile breadboard for overall performance evaluation. Experimental results show that a hardware-based video
authentication system using this watermarking technique features
minimal video quality degradation and can withstand certain
potential attacks, e.g., cover-up attacks, cropping, and segment
removal on video sequences. Furthermore, the proposed
hardware-based watermarking system features low power consumption,
low-cost implementation, high processing speed, and reliability.
Index Terms—Digital video watermarking, hardware implementation, real-time data hiding, very large scale integration
(VLSI), video authentication.

I. Introduction

RECENTLY, the advances in electronic and information
technology, together with the rapid growth of techniques
for powerful digital signal and multimedia processing, have
made the distribution of video data much easier and faster
[1]–[3]. However, concerns regarding authentication of the
digital video are mounting, since digital video sequences
are very susceptible to manipulations and alterations using
widely available editing tools. This issue becomes more
significant when the video sequence is to be used as evidence.
In such cases, the video data should be credible. Consequently,
authentication techniques are needed in order to maintain
authenticity, integrity, and security of digital video content. As
a result, digital watermarking (WM), a data hiding technique,
has been considered one of the key authentication methods
[4], [5]. Digital watermarking is the process of embedding an

Manuscript received February 15, 2011; revised November 7, 2011;
accepted April 13, 2012. Date of publication June 8, 2012; date of current
version February 1, 2013. This paper was recommended by Associate Editor
M. Berekovic.
S. D. Roy, X. Li, and O. Yadid-Pecht are with the Integrated Sensors
Intelligent System Laboratory, Department of Electrical and Computer
Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada (e-mail:
[email protected]).
Y. Shoshan is with Texas Instruments, Inc., Raanana 43662, Israel (e-mail:
[email protected]).
A. Fish is with the VLSI Systems Center, Ben-Gurion University, Beer-Sheva 84105, Israel (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TCSVT.2012.2203738

additional, identifying information within a host multimedia
object, such as text, audio, image, or video. By adding a
transparent watermark to the multimedia content, it is possible
to detect hostile alterations, as well as to verify the integrity
and the ownership of the digital media.
Today, digital video WM techniques are widely used in various video applications [2]–[7]. For video authentication, WM
can ensure that the original content has not been altered. WM
is used in fingerprinting to trace a malicious user, and also
in copy control systems with WM capability to prevent unauthorized
copying [2], [5]. Because of their potential commercial
applications, current digital WM techniques have focused on
multimedia data and in particular on video content. Over the
past few years, researchers have investigated the embedding
of visible or invisible digital watermarks into raw or
uncompressed digital video, on both software [7]–[11] and
hardware platforms [12]–[16]. Compared with still-image
WM techniques, new problems and new challenges have
emerged in video WM applications. Shoshan et al. [4] and
Li et al. [5] presented an overview of the various existing
video WM techniques and showed their features and specific
requirements, possible applications, benefits, and drawbacks.
The main objective of this paper is to describe an efficient
hardware-based concept of a digital video WM system that
features low power consumption, efficient low-cost implementation,
high processing speed, and reliability, and that embeds an
invisible, semifragile watermark in compressed video streams. It
works in the discrete cosine transform (DCT) domain in real
time. The proposed WM system can be integrated with a video
compression unit, and it achieves performance that matches
complex software algorithms [17] within a simple, efficient
hardware implementation. The system also features minimal
video quality degradation and can withstand certain potential
attacks, e.g., cover-up attacks, cropping, and segment removal on
video sequences. The above-mentioned design objectives were
achieved by combining a parallel hardware architecture with
a simplified design approach for each of the components.
This makes the design well suited to devices that require
high tampering resistance, such as surveillance cameras and
video protection apparatus. The proposed WM system is
implemented using the Verilog hardware description language
(HDL), synthesized into a field-programmable gate array
(FPGA), and then tested using a custom versatile breadboard
for performance evaluation.
The remainder of this paper is organized as follows.
Section II provides a survey on the previous related work

1051-8215/$31.00 © 2012 IEEE


on video WM technologies. The details of the proposed
novel video WM system solution are described in Section III.
Section IV presents the hardware architecture of the proposed
video WM system, followed by a description of the FPGA-based
prototyping of the hardware architecture. Section V
discusses the experimental setup and verification methodology
used to analyze the FPGA experimental results, which is
followed by comparisons with existing approaches. Conclusions
are presented in Section VI.

II. Related Work on Video Watermarking Systems
A. Robustness Level of WM for Video Authentication
The level of robustness of the WM can be categorized
into three main divisions: fragile, semifragile, and robust. A
watermark is called fragile if it fails to be detectable after
the slightest modification. A watermark is called robust if
it resists a designated class of transformations. A semifragile
watermark is one that is able to withstand certain legitimate
modifications, but cannot resist malicious transformations [36],
[42]. There is no absolute robustness scale and the definition is
very much dependent on the requirements of the applications
at hand, as well as the set of possible attacks. Different
applications will have different requirements.
In copyright protected applications, the attacker wishes to
remove the WM without causing severe damage to the image.
This can be done in various ways, including digital-to-analog
and analog-to-digital conversions, cropping, scaling, segment
removal, and others [32], [33]. Robust WM is used in these
applications so that it remains detectable even after these
attacks are applied, provided that the host image is not severely
damaged. For image integrity applications, fragile watermarks
are commonly used so that even the slightest change in the
image can be detected. Most fragile WM methods perform
the embedding of added information in the spatial domain.
Unlike the fragile WM techniques, a semifragile invisible
watermark, such as that proposed in this paper, is designed
to withstand certain legitimate manipulations, e.g., lossy
compression and mild geometric changes of images, but is capable
of rejecting malicious changes, e.g., cropping, segment removal,
and so on. Furthermore, the semifragile approaches are generally
processed in the frequency domain.
Frequency-domain WM methods are more robust than the
spatial-domain techniques [6]. In practical video storage and
distribution systems, video sequences are stored and transmitted in a compressed format, and during compression the image
is transformed from spatial domain to frequency domain.
Thus, a watermark that is embedded and detected directly in
the compressed video stream can minimize computationally
demanding operations. Therefore, working on compressed
rather than uncompressed video is beneficial for practical WM
applications.
B. Watermark Implementations: Hardware Versus Software
A WM system can be implemented on either software
or hardware platforms, or some combinations of the two.
In software implementation, the WM scheme can simply be

implemented in a PC environment. The WM algorithm’s operations can be performed as machine code software running on
an embedded processor. By programming the code and making
use of available software tools, it can be easy to design and
implement any WM algorithm at various levels of complexity.
Over the last decade, numerous software implementations of
WM algorithms for relatively low-data-rate signals (such as
audio and image data) have been developed [7]–[11]. While
the software approach has the advantage of flexibility, computational limitations may arise when attempting to utilize
these WM methods for video signals or in portable devices.
Therefore, there is a strong incentive to apply hardware-based
implementation for real-time WM of video streams [12]. The
hardware-level design offers several distinct advantages over
the software implementation in terms of low power consumption, reduced area, and reliability. It enables the addition of
a tiny, fast and potentially cheap watermark embedder as a
part of portable consumer electronic devices. Such devices
can be a digital camera, camcorder, or other multimedia
devices, where the multimedia data are watermarked at the
origin. On the other hand, hardware implementations of WM
techniques offer less flexibility and must balance computational
and design complexity. The algorithm must be carefully designed
to minimize any susceptibility while maintaining a sufficient
level of security.
C. Past Research on Video Watermarking
In the past few years, research effort has been focused
on the efficient implementation of WM systems on hardware
platforms. For example, Strycker et al. [12] proposed a
well-known video WM scheme, called just another watermarking
system (JAWS), for TV broadcast monitoring and implemented
the system on a Philips TriMedia TM-1000 very long
instruction word (VLIW) processor. The experimental results
proved the feasibility of WM in a professional TV broadcast
monitoring system. Mathai et al. [13], [14] presented an
application-specific integrated circuit (ASIC) implementation
of the JAWS WM algorithm using a 1.8-V, 0.18-μm
complementary metal-oxide-semiconductor technology for real-time
video stream embedding. With a core area of 3.53 mm² and
an operating frequency of 75 MHz, that chip watermarked
raw digital video streams at a peak pixel rate of over
3 Mpixels/s while consuming only 60 mW of power.
A new real-time WM very large scale integration (VLSI)
architecture for spatial and transform domain was presented by
Tsai and Wu [15]. Maes et al. [16] presented the millennium
watermarking system for copyright protection of DVD video
and some specific issues, such as watermark detector location
and copy generation control, were also addressed in their work.
An FPGA prototype for Haar-wavelet-based real-time video
watermarking was presented by Jeong et al. [38]. Petitjean
et al. [39] presented a real-time video watermarking system
using DSP and VLIW processors that embeds the watermark
using fractal approximation. Mohanty et al. [40]
presented a concept of secure digital camera with a built-in
invisible-robust watermarking and encryption facility. Also,
another watermarking algorithm and a corresponding VLSI
architecture that inserts a broadcaster's logo (a visible watermark)

into video streams in real time was presented [41] by the same
group.
In general, digital WM techniques proposed so far for media
authentication are usually designed as visible, invisible-robust,
or invisible-fragile watermarks, according to the level of
required robustness [4], [5]. Each of these schemes is important
due to its unique applications. In this paper, however, we
present the hardware implementation of an invisible semifragile
watermarking system for video authentication. The motivation
here is to integrate the video watermarking system with a
surveillance video camera for real-time watermarking at the
source end. To the best of our knowledge, our work is the first
semifragile watermarking scheme for video streams with a
dedicated hardware architecture.

III. Procedure for the Digital Video Watermarking System

In this section, a detailed description of the hardware
architecture of the proposed digital video WM system is
provided. Fig. 1 illustrates the general block diagram of the
proposed system, which comprises four main modules: a
video camera, a video compression unit, a watermark generation
unit, and a watermark embedding unit.

Fig. 1. Overview of the proposed video WM system.

The watermark embedding approach is designed to be
performed in the DCT domain. This holds several advantages.
DCT is used in the most popular still-image and video
compression formats, including JPEG, MPEG-x, and H.26x. This
allows the integration of both watermarking and compression
into a single system. Compression is divided into three
elementary phases: DCT transformation, quantization, and
Huffman encoding. Embedding the watermark after quantization
makes the watermark robust to DCT compression with a
quantization of equal or lower degree than that used during the
watermarking process. Another advantage of this approach is
that in image or video compression the images or frames are
first divided into 8 × 8 blocks. By embedding the WM
specifically into each 8 × 8 block, tamper localization and better
detection ratios are achieved [25].

Each of the video frames undergoes 8 × 8 block DCT
and quantization. Then, they are passed to the watermark
embedding module. The watermark generation unit produces
specific watermark data for each video frame, based on initial
predefined secret keys. The watermark embedding module
inserts the watermark data into the quantized DCT coefficients
of each video frame according to the algorithm detailed
below. Finally, the watermarked DCT coefficients of each video
frame are encoded by the video compression unit, which
outputs the compressed frame with the embedded authentication
watermark data.

A. Video Compression
Currently, all popular standards for video compression,
namely the MPEG-x (ISO standard) and H.26x (ITU-T standard)
formats, use the same basic hybrid coding scheme, which applies
the principle of motion-compensated prediction and block-based
transform coding using the DCT [18]. The MPEG-2 video
compression standard is described below as a representative
case for utilizing the WM algorithm with more advanced
DCT-based compression methods.
Generally, a video sequence is divided into multiple groups
of pictures (GOPs), each representing a set of video frames that
are neighboring in display order. An encoded MPEG-2 video
sequence is made up of two categories of encoded pictures:
intraframes (I-frames) and interframes (P-frames or B-frames).
P-frames are forward-prediction frames and B-frames are
bidirectional-prediction frames. Within a typical encoded GOP,
P-frames may be 10% of the size of I-frames, and B-frames are
about 2% of the size of I-frames.
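For a sense of scale, the relative sizes quoted above allow a quick back-of-the-envelope estimate of an encoded GOP's size. The sketch below (the I-frame size and the 15-frame GOP pattern are assumed for illustration) simply applies the 10% and 2% ratios:

```python
def gop_size_estimate(i_frame_bytes, n_p, n_b):
    """Rough encoded-GOP size using the relative sizes quoted in the
    text: each P-frame ~10% and each B-frame ~2% of the I-frame."""
    return i_frame_bytes * (1 + 0.10 * n_p + 0.02 * n_b)

# A common 15-frame GOP holds 1 I-, 4 P-, and 10 B-frames, so a
# 100 kB I-frame yields a GOP of about 160 kB:
# gop_size_estimate(100_000, 4, 10)
```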
There can be two types of redundancies in video frames:
temporal redundancy and spatial redundancy. MPEG-2 video
compression technique reduces these redundancies to compress the images.
Within a GOP, the temporal redundancy among the video
frames is reduced by applying temporal differential pulse code
modulation (DPCM). The major video coding standards, such
as H.261, H.263, MPEG-1, MPEG-2, MPEG-4, and H.264, are
all based on the hybrid DPCM/DCT CODEC, which incorporates motion estimation and motion compensation function,
a transform stage, and an entropy encoder [19], [20]. It has
been illustrated in Fig. 2 that an input video frame Fn is
compared with a previously encoded reference frame Fn−1,
and a motion estimation function finds a region in Fn−1 that
matches the current macroblock in Fn. The offset between the
current macroblock position and the chosen reference region
is a motion vector, dk. Based on this dk, a motion-compensated
prediction F′n is generated, which is then subtracted from the
current macroblock to produce a residual or prediction error,
e [20]. For proper decoding, this motion vector, dk, has to be
transmitted as well.
The spatial redundancy in the prediction error, e (also called
the displaced frame difference), of the predicted frames, and in
the I-frame, is reduced by the following operations: each frame
is split into blocks of 8 × 8 pixels that are compressed using
the DCT, followed by quantization (Q) and entropy coding
(run-level coding and Huffman coding) (Fig. 2).
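As a minimal illustration of the motion-estimation and residual steps described above, the following Python sketch performs an exhaustive sum-of-absolute-differences (SAD) search for one macroblock and forms the residual e; the block size and search range are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np

def motion_estimate(cur_block, ref, top, left, search=4):
    """Find the best-matching region in the reference frame for one
    macroblock by exhaustive search minimizing the sum of absolute
    differences (SAD). Returns the motion vector dk = (dy, dx)."""
    h, w = cur_block.shape
    best_sad, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # candidate region falls outside the frame
            sad = np.abs(cur_block.astype(int) - ref[y:y+h, x:x+w].astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

def residual(cur_block, ref, top, left, mv):
    """Prediction error e = current macroblock - motion-compensated
    prediction F'n taken from the reference frame at offset mv."""
    dy, dx = mv
    h, w = cur_block.shape
    pred = ref[top+dy:top+dy+h, left+dx:left+dx+w]
    return cur_block.astype(int) - pred.astype(int)
```

The residual, rather than the raw frame, is what then undergoes the DCT, quantization, and entropy-coding stages.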
Hardware implementation of the MPEG-2 standard is not
trivial. To simplify the implementation of the video compressor
module, the motion JPEG (MJPEG) video encoding technique
can be considered instead of MPEG-2, and it was chosen in
our experiment. The MJPEG standard was


Fig. 2. Block diagram of the hybrid DPCM/DCT coding scheme [20].

Fig. 3. Structure of the primitive watermark.

Fig. 4. Block diagram of the proposed watermark generator.

developed by the Xiph.Org Foundation for Theora encoders
(based on VP3, made by On2 Technologies) [26], [37] to
compete with MPEG encoders. The encoding process performed
on the raw data is similar in both MPEG-2 and MJPEG;
the only difference is the motion-compensated prediction that
MPEG uses to encode the interframes (P and B frames).
B. Watermark Generation
Since simple watermark data can be easily cracked, it is
essential that the primitive watermark sequence be encoded
by a cipher. This ensures that the primitive watermark data
are secured before being embedded into each video frame.
Currently, there are different approaches to convert a primitive
watermark into a secured pattern [14], [15]. In contrast to
existing approaches, a novel video watermark generator is
proposed here. The WM generator produces a secure watermark
sequence for each video frame using a meaningful primitive
watermark sequence and secret input keys.
According to the recommendation by Dittman et al. [22]
on the features of a video watermark, a primitive watermark
pattern can be defined as a meaningful identifying sequence for
each video frame. As shown in Fig. 3, the unique meaningful
watermark data for each video frame contain the time, date,
camera ID, and frame serial number (that is related to its
creation). This will establish a unique relationship of the
video stream frames with the time instant, the specific video
camera, and the frame number. Any manipulation, such as
frame exchange, cut, and substitution, will be detected by the
specific watermark. The corresponding N-bit (64-bit)
binary-valued pattern, ai, will be used as the primitive watermark
sequence. This generates a different watermark for every
frame (time-varying) because of the continuously changing
serial number and time.
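The field layout of the primitive watermark can be sketched as simple bit-packing. The individual field widths below are assumptions made for illustration; the paper fixes only the 64-bit total:

```python
def make_primitive_wm(date, time_s, cam_id, serial):
    """Pack the identifying fields into one 64-bit primitive watermark.
    Assumed (illustrative) widths: 16-bit packed date, 17-bit time of
    day in seconds, 15-bit camera ID, 16-bit frame serial number."""
    assert date < 1 << 16 and time_s < 1 << 17
    assert cam_id < 1 << 15 and serial < 1 << 16
    # Bits 63..48: date | 47..31: time | 30..16: camera ID | 15..0: serial
    return (date << 48) | (time_s << 31) | (cam_id << 16) | serial
```

The 64 bits of the returned word are the sequence ai that feeds the watermark generator.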
The block diagram of the proposed novel watermark generator is depicted in Fig. 4. A secure watermark pattern is
generated by performing expanding, scrambling, and modulation on a primitive watermark sequence. There are two digital
secret keys: Key 1 is used for scrambling and Key 2 is used for
the random number generator (RNG) module that generates a
pseudorandom sequence.
Initially, the primitive binary watermark sequence, ai (of
64 bits), is expanded (ai′) and stored in a memory buffer. It
is expanded by a factor cr. For example, if we use a 64-bit
primitive watermark sequence, then for a 256 × 256-pixel
video frame, cr will be (256 × 256)/(8 × 8), or 1024. This is
done to match the appropriate length for the video frame.
Scrambling is a sequence of XOR operations among
the contents (bytes) of the expanded primitive WM in the
buffer. Key 1 initiates the scrambling process by specifying
two different addresses (Add1 and Add2) of the buffer, whose
contents are XORed together. The basic purpose of scrambling
is to add complexity and encryption to the primitive watermark
structure. After that, the expanded and scrambled sequence ci
is obtained. The bit size of ci is the same as the size of the
video frame. Finally, the expanded and scrambled watermark
sequence, ci, is modulated by a binary pseudorandom sequence
to generate the secured watermark sequence wi. Due to the
random nature of the pseudorandom sequence pi, the modulation
makes the watermark sequence ci pseudorandom and thus
difficult to detect, locate, and manipulate.
A secure pseudorandom sequence pi used for the modulation
can be generated by an RNG structure using Key 2. The RNG
is based on a Gollmann cascade of filtered feedback-with-carry
shift register (F-FCSR) cores, presented by Li et al. [23], [30].
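The expand-scramble-modulate pipeline can be sketched in software as follows. This is an illustrative model only: the splitting of Key 1 into two counter seeds, the counter strides, and the use of Python's `random` module in place of the F-FCSR-based RNG are all assumptions:

```python
import random

def generate_watermark(primitive_bits, cr, key1, key2):
    """Sketch of the WM generator: expand the 64-bit primitive sequence
    ai by cr, scramble it with XORs between buffer entries addressed by
    two Key-1-seeded counters, then modulate with a pseudorandom
    sequence seeded by Key 2 to produce wi."""
    # 1) Expansion: read the primitive sequence out cr times (ai').
    buf = bytearray(primitive_bits * cr)  # one entry per bit, 0 or 1
    # 2) Scrambling: Key 1 seeds two counters producing address pairs
    #    (Add1, Add2); the entries at those addresses are XORed.
    add1, add2 = key1 >> 16, key1 & 0xFFFF
    for _ in range(len(buf)):
        buf[add1 % len(buf)] ^= buf[add2 % len(buf)]
        add1, add2 = add1 + 1, add2 + 3  # counters advance with different strides
    # 3) Modulation: XOR ci with a pseudorandom bit sequence pi.
    rng = random.Random(key2)
    return [b ^ rng.getrandbits(1) for b in buf]
```

For a 64-bit primitive sequence and cr = 1024, the output wi is 65 536 bits long, one bit per pixel of a 256 × 256 frame; the same keys always reproduce the same sequence, which is what allows detection at the receiver.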
C. Watermark Embedding
Watermark embedding is done only in the I (intra) frames.
This is understandable, since B and P frames are predicted from
I frames. If all I, B, and P frames were watermarked, the
watermarked data of the previous frame and those of the current
frame could accumulate, resulting in visual artifacts (called
“drift” or “error accumulation”) during decoding [8]. To
avoid this major issue, within each GOP of an MPEG-2 video
stream, only the I-frame is identified to be watermarked.

Fig. 5. Dataflow of the proposed WM algorithm.
The watermarking algorithm should be hardware friendly
in a way that it can be implemented in hardware with high
throughput. For this purpose, one concern for the algorithm
development should be that it must support pipelining architecture so that two or more macroblocks inside a single video
frame or more than one frame can be watermarked simultaneously. This feature will aid in increasing the processing speed
of watermarking.
The watermark embedding approach used in this paper
was originally developed by Nelson et al. [24] and Shoshan
et al. [25]. This WM algorithm, capable of inserting a
semifragile invisible watermark in a compressed image in the
DCT frequency domain, was modified and then applied in
watermarking of a video stream. In general, for each DCT
block of a video frame, N cells need to be identified as
“watermarkable” and modulated by the watermark sequence.
The chosen cells contain nonzero DCT coefficient values and
are found in the mid-frequency range. This algorithm was
detailed by Shoshan et al. [25]. The proposed WM algorithm
along with MPEG-2 video encoding standard is presented as
a flow chart in Fig. 5. This can be described as follows.
1) Split I frame and watermark data into 8 × 8 blocks.
2) For each 8 × 8 block (both watermark data and I frame),
perform DCT, quantization, and zig–zag scan to generate
quantized DCT coefficients.
3) Identify N watermarkable cells for each block and
calculate the modification value for each selected cell.
4) Modify the identified watermarkable DCT coefficients
according to the modification values.
5) Perform inverse quantization and inverse DCT on each
watermarked 8 × 8 block of coefficients to reconstruct the
watermarked I-frame pixel values.
6) Buffer the reconstructed watermarked I frame.

7) Perform motion estimation for B/P frames to obtain the
motion vectors.
8) Using the motion vectors and the reconstructed watermarked
I frame, perform motion compensation.
9) The difference between the current frame and the
motion-compensated prediction frame is the prediction
error.
10) Perform DCT, quantization, and zig–zag scan on the
prediction error.
11) Perform entropy coding for the blocks of the different
frames.
12) Generate the compressed video stream with the embedded
watermark.
13) To avoid computationally demanding operations and to
simplify the hardware implementation, watermarking can
also be performed with an MJPEG video compression
unit. Since the watermark is embedded only in I frames,
the steps stated above remain the same for the MJPEG
standard, except that motion estimation and motion
compensation are omitted.

Fig. 6. Block diagram of the hardware system architecture.
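Steps 3) and 4) can be illustrated with a small software model of watermarkable-cell selection. The mid-frequency band and the parity-based modification rule below are illustrative stand-ins for the exact rules defined in [25]:

```python
# Mid-frequency band in zig-zag scan order: after the low-frequency
# coefficients, before the mostly-zero high-frequency tail. Both the
# band and the modification rule are illustrative, not taken from [25].
MID_BAND = range(10, 36)

def embed_block(zigzag_coeffs, wm_bits, n=2):
    """Select n watermarkable cells (nonzero, mid-frequency) in one
    quantized 8x8 DCT block and force each cell's parity to match the
    corresponding watermark bit. Returns None when fewer than n
    candidate cells exist; such a block is left unmarked."""
    cells = [i for i in MID_BAND if zigzag_coeffs[i] != 0][:n]
    if len(cells) < n:
        return None  # homogeneous block: disregarded
    out = list(zigzag_coeffs)
    for bit, i in zip(wm_bits, cells):
        if (abs(out[i]) & 1) != bit:
            out[i] += 1 if out[i] > 0 else -1  # grow magnitude by 1 to flip parity
    return out
```

Growing the magnitude (rather than shrinking it) guarantees a marked cell never becomes zero, so the detector can find the same cells again by rerunning the selection on the received block.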
IV. Hardware Architecture Design
There exists a wide range of available techniques to implement the peripheral blocks of the proposed video WM

Fig. 7. Hardware architecture of the MJPEG video compressor module with the watermarking unit (Fori is the original frame, Fref is the reference frame).

system. Here, the focus is on simplifying the process as
much as possible, thus making it fit easily within existing
video processing circuitry. At the same time, the security level
and video frame quality are kept high. An overall view of
the hardware implementation for the video WM system is
depicted in Fig. 6. The proposed system architecture includes
six modules: video camera, video compressor, watermark
generator, watermark embedder, control unit, and memory. The
parts implemented by the FPGA are shown in shaded blocks.
The hardware implementation for the complete design is
developed using Verilog HDL. As previously mentioned, all
the processing in the implementation is assumed to be done
on a block basis (such as 8 × 8 pixels). First, the captured
video frame is temporarily stored in a memory buffer, and
then each block of the frame data is continuously processed
by the video compressor unit using DCT and quantization
cores. The watermark embedder block inserts an identifying
message, generated using the watermark generator unit, in the
selected block data within the video frame and sends it to
memory for storage. The control unit is responsible for driving
the operations of the modules and the data flow in the whole
system.
A. Video Compressor
For hardware implementation of the video compressor
module, the MJPEG video encoding technique rather than
the MPEG-2 standard was chosen, because MJPEG offers
several significant advantages over MPEG-2 for hardware
implementation [27]. Even though MJPEG provides less
compression than MPEG-2, the MJPEG format is easy to implement
in hardware with relatively low computational complexity
and low power dissipation. Moreover, it is

available for free usage and modification, and it is now
supported by many video data players, such as VLC media
player [28], MPlayer [29], and so on [30], [37].
Furthermore, using the MJPEG video encoding technique
has no effect on the watermark embedding procedure, since the
intraframes (chosen as watermarkable) in both compression
standards have the same formats. This is due to the same
encoding process that is performed on the raw data. The
difference is motion-compensated prediction, which is used
to encode the interframes. However, as described, only the intraframes are identified to be watermarked, thus the procedure
does not affect the watermark embedding process. Therefore,
for our evaluation purposes, the MJPEG compression standard
was found to be a better alternative than MPEG-2 for the
hardware implementation of the video compressor module.
Fig. 7 depicts the hardware architecture of the MJPEG
video compressor.
Depending on the frame type, raw video data
are encoded either directly (intraframes) or after the reference
frame is subtracted (interframes). The encoded interframes (P)
are passed to the decoder directly, while the intraframes (I) are
fed to the watermark embedder and, after WM embedding, are
also passed to the decoder.
The output of the decoder is combined with the reference
frame data delayed by the bypass buffer so that it arrives at the
same time as the processed one and is fed to the multiplexer.
The multiplexer selects the inverse DCT output (decoded intraframes) or the sum of the inverse DCT output and previous
reference frame (decoded interframes). The multiplexer output
is stored in the SRAM as reconstructed frames. Furthermore,
the decoded interframes are stored in a memory buffer to be
utilized as reference frame for the next frames which are yet


Fig. 8. Hardware architecture of the watermark generator module.

to be encoded. The second branch of the video data from the
encoder block is fed to the watermark embedder module.
B. Watermark Generator
Fig. 8 describes the hardware architecture of the novel
watermark generator. The expanding procedure is accomplished
by providing consecutive clock signals so that an expanded
watermark sequence can be generated by reading out the
primitive watermark sequence (ai) cr times. The expanded
sequence (ai′) is stored in a memory buffer.
Scrambling is done using the secret digital key Key1,
which has two parts that initialize two different counters. At
each state of the counters, two readings (addressed by Add1
and Add2) are taken from the buffer and XORed together.
Thus, the scrambled watermark sequence, ci, is generated.
Furthermore, different digital keys make the counters start
from different states and generate different address sequences,
so that different patterns of ci can be obtained.
A secure pseudorandom sequence pi, generated by the
proposed Gollmann cascade filtered feedback-with-carry shift
register RNG [23], seeded with the secret key Key2, is used to
modulate the expanded and scrambled watermark sequence ci.
Finally, the generated secure watermark data wi is embedded
into the video stream by the watermark embedder.
C. Watermark Embedder
A schematic view of the hardware architecture for the
watermark embedder unit is presented in Fig. 9. As described
by Shoshan et al. [25], the watermark embedder works in
two phases (calculation and embedding). When considering
a cycle of embedding the watermark in one 8 × 8 block
of pixels, each phase takes one block cycle. Two blocks
are processed simultaneously in a pipelined manner so that
the embedding process only requires one block cycle. As
the number of cells to be watermarked (N) in an 8 × 8
block increases, the security robustness of the algorithm also
increases. But such an increase reduces the video frame quality
because of the reduction in the originality of the video frame.
Simulation results show that even for N as low as 2, the
performance like detection ratio or peak signal-to-noise ratio
(PSNR) is satisfactory. A block that produces less than N
cells is considered to be unmarked and disregarded. Only blocks that are distinctively homogeneous and have low values for the high-frequency coefficients are problematic. The details of the architecture for the watermark embedder module were presented by Shoshan et al. [25].

Fig. 9. Hardware architecture of the watermark embedder module designed by Shoshan et al. [25].
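The embedding step itself is specified by Shoshan et al. [25]. Purely for illustration, the sketch below assumes that a "cell" is any AC coefficient of the quantized 8 × 8 block whose magnitude exceeds a threshold, and that one watermark bit is written into the least significant bit of each of the N selected coefficients; the actual cell-selection and embedding rules of [25] differ in detail:

```python
def embed_block(qcoeffs, wbits, N=2, threshold=1):
    """Illustrative embedding into one 8x8 block of quantized DCT
    coefficients (zig-zag order). Blocks yielding fewer than N cells
    are left unmarked, mirroring the behavior described in the text."""
    out = list(qcoeffs)
    # Candidate cells: AC coefficients (index 1..63) above the threshold.
    cells = [i for i in range(1, 64) if abs(out[i]) >= threshold]
    if len(cells) < N:
        return out, False            # too homogeneous: block disregarded
    for i, bit in zip(cells[:N], wbits[:N]):
        out[i] = (out[i] & ~1) | bit  # force LSB to the watermark bit
    return out, True
```

A distinctively homogeneous block (all AC coefficients near zero) falls through the N-cell test and is returned unmarked, which is exactly the problematic case noted above.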
D. Control Unit
The control unit generates control signals to synchronize
the overall process in the whole system. As shown in Fig. 10,
it is implemented as a finite state machine (FSM) with four
main states.
1) S0: In this initial state, all the modules stay idle until the signal Start is set high. Depending on the Select signal, different processing steps are applied to the video frames: when Select is 1, the control moves to state S1 for intraframes; when Select is 0, it jumps directly to state S2 for interframes.
2) S1: In this state, watermark embedding is performed. The intraframe blocks read from memory and the generated watermark sequence are added together by activating the watermark embedder module. Once the watermarking of the block is completed, the Blk signal goes to "1" and the FSM moves to state S2.
3) S2: In the state S2, the watermarked intraframe data
from the watermark embedder module or the unmodified
interframe sequences from the memory are encoded. The
signal Blk remains “0” until the encoding of the current
block is completed. When finished, the encoded and
watermarked blocks are fed to the next state S3.
4) S3: In this stage, the watermarked and compressed video
frame blocks are written back to the memory. When all
the blocks of a frame are encoded the control signal Img
changes to “1” and the FSM goes back to the state S0
for considering the next frame. If encoding of all the
blocks of the current frame is not finished, the system
goes back to the state S1 or S2 depending on the type
of the current frame (interframe or intraframe).
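The four-state controller can be modeled in software as a next-state function. The signal semantics follow the state descriptions above; the exact timing of the signal changes is simplified for this sketch:

```python
def next_state(state, start=0, select=0, blk=0, img=0):
    """Software model of the WM-system controller FSM (states S0-S3)."""
    if state == "S0":                  # idle until Start goes high
        if not start:
            return "S0"
        return "S1" if select == 1 else "S2"   # intra- vs. interframe
    if state == "S1":                  # watermark embedding
        return "S2" if blk == 1 else "S1"      # Blk=1: block watermarked
    if state == "S2":                  # encoding of the current block
        return "S3" if blk == 1 else "S2"      # Blk stays 0 until done
    if state == "S3":                  # write-back to memory
        if img == 1:
            return "S0"                # whole frame done: next frame
        return "S1" if select == 1 else "S2"   # more blocks of this frame
    raise ValueError(state)
```

Stepping the model with the signal values named in the text reproduces the transitions of Fig. 10.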
E. FPGA-Based Prototyping
Each module in the proposed digital video WM system, including the MJPEG compressor, the watermark generator, and the watermark embedder, has been implemented and tested individually, and then integrated to obtain the final system architecture. The proposed architecture was first modeled in

Verilog HDL, and the functional simulation of the HDL design was performed using Mentor's ModelSim tool. Finally, the system design was synthesized to an Altera Cyclone EP1C20 FPGA device using the Altera Quartus II design software. The block diagram of the FPGA implementation of the whole system is shown in Fig. 11. The synthesis results provide the hardware resource usage of the units, as shown in Table I.

Fig. 10. State diagram of the controller for the WM system.

TABLE I
FPGA Synthesis Report

Components                Logic Cells   Memory Bits   Initial Latency (cycles)   Power Consumption
MJPEG video compressor    8822          13 540        188                        260 mW
Watermark generator       309           512           64                         10 mW (generator and embedder)
Watermark embedder        132           744           64
Overall architecture      9263          14 796        320                        270 mW

V. Experimental Results

A. Methodology for Verification
In order to evaluate the performance of the hardware-based video WM system properly, the algorithm was tested with two representative grey-scale video clips: a "dynamic" scene, which has significant high-frequency patterns, and a "static" scene, which has several homogeneous (low spatial frequency) areas. The video streams, at a rate of 25 frames/s (f/s) and 256 × 256 pixels/frame, were captured by a surveillance video camera.

For each video stream, a comparison was performed between two sets of experimental results: the original video stream versus the MJPEG video stream, and the original video data versus the watermarked video stream. The comparisons were quantified using the standard video quality metric PSNR, a well-known quantitative measure in multimedia processing used to determine the fidelity of a video frame and the amount of distortion found in it, as suggested by Piva et al. [3] and Strycker et al. [12]. The PSNR, measured in decibels (dB), is computed using

$$\mathrm{PSNR} = 10\log_{10}\left(\frac{255^2}{\mathrm{MSE}}\right) \tag{1}$$

$$\mathrm{MSE} = \frac{1}{MN}\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\left[f(m,n)-k(m,n)\right]^2 \tag{2}$$

where 255 is the maximum pixel value in the grey-scale image and MSE is the average mean-squared error, as defined in (2). Here, f and k are the two compared images, the size of each being M × N pixels (256 × 256 pixels in our experiment).

B. Analysis of Experimental Results
The results of the experiment done with the implemented system are presented in Fig. 12, which contains the original sample frames, the compressed frames, and the watermarked frames. The presented results are achieved for the case in which only two DCT coefficients in each block are changed. In terms of PSNR, the quality of the watermarked frames is maintained above ∼35 dB, measured consistently with no visually perceptible artifacts. Moreover, the quality of the watermarked video is comparable to that generated by a software-based algorithm.

Three sets of video GOPs consisting of three frames (one I and two P frames) were tested (for both static and dynamic scenes). As described above, the interframes are encoded based on the intraframes, and thus the PSNR values of the intraframes are expected to be higher than those of the interframes, as demonstrated in Fig. 13(a) and (b). This is why the "hill" behavior appears at the first, fourth, and seventh frames, which are the I-frames. On the other hand, watermark embedding did not degrade the quality of the video streams compared to the compressed video without the watermark embedded (Fig. 13). Therefore, the proposed hardware implementation of the video watermarking system features minimum degradation of video quality.

C. Performance Analysis

The performance of the overall FPGA implementation was
evaluated in terms of hardware cost, power consumption,
processing speed, and security issues.
1) Hardware Cost and Power Consumption: The hardware resources used by the different modules are given in Table I. The results clearly indicate that the addition of the watermark generator and embedder modules caused only a 4.99% increase in logic cell usage and a 9.28% increase in the memory resources consumed, relative to the hardware of the video compressor.
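The PSNR figures quoted throughout Section V follow (1) and (2). A minimal reference computation for two equal-size 8-bit grey-scale frames, stored here as nested lists for simplicity, might look like:

```python
import math

def psnr(f, k):
    """PSNR per (1)-(2) for two M x N 8-bit grey-scale frames."""
    M, N = len(f), len(f[0])
    mse = sum((f[m][n] - k[m][n]) ** 2
              for m in range(M) for n in range(N)) / (M * N)
    if mse == 0:
        return float("inf")            # identical frames
    return 10 * math.log10(255 ** 2 / mse)
```

Values above roughly 35 dB, as reported for the watermarked frames, correspond to distortion that is not visually perceptible.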

Fig. 11. FPGA-based implementation of the proposed video WM system.

Fig. 12. Examples of watermarked video streams. (a) Original video frame. (b) MJPEG video frame. (c) Watermarked video frame.

The combined system would easily fit in the original FPGA
device. In general, any device that is large enough for the
implementation of the MJPEG video compressor would be
able to accommodate the additional hardware required for the
watermark generator and embedder blocks.
Power consumption is an important concern in the hardware implementation of a VLSI system and needs to be kept as low as possible. Using the built-in power analyzer of Altera Quartus II, the power consumption of our WM system was measured: the MJPEG video compressor with the watermarking unit consumes 270 mW, whereas the MJPEG video compressor by itself consumes 260 mW. Thus, with an addition of only 10 mW, the video watermarking unit (watermark generator and watermark embedder) can be integrated into the MJPEG video compressor unit.
2) Processing Speed: The data of each frame are processed macroblock-wise (8 × 8). Each macroblock first passes through the DCT module, and then through the WM embedder and the inverse DCT module, respectively. From the timing diagram of Fig. 14, the processing time of a single I frame can be determined.

Fig. 13. PSNR comparisons. (a) Static scene. (b) Dynamic scene.

Fig. 14. Timing diagram of a 256 × 256 frame processing (I frame). MB stands for macroblock of 8 × 8 pixels.

For the first 8 × 8 macroblock of a frame, processing takes 320 clock cycles; every subsequent 8 × 8 block takes 64 cycles (the throughput). The WM generator takes 64 clock cycles to generate the WM data, which is completed within the first 64 clock cycles of the frame processing period; the WM data are then stored in a WM buffer. Hence, the WM generator does not contribute to the initial latency of the frame processing period.
The pipelined architecture and parallelism of the designed system help in achieving this high throughput after the initial latency. Processing of P frames requires less time, as they are not watermarked. If we consider a video frame of N × M pixel resolution, the time it takes to watermark
one frame of the video is defined by

$$T = \left[\text{Latency} + \left(\frac{MN}{8\times 8} - 1\right)\times\text{Throughput}\right]\times\frac{1}{\text{clock frequency}}. \tag{3}$$
For the present case, in which real-time video streams of
size 256 × 256 pixels/frame are used, the number of cycles
it would take to watermark one frame would be 65 792 clock
cycles. Thus, the processing time it takes to watermark each
frame is 1.6 ms (607 f/s) at a clock frequency of 40 MHz. For
video streams of 640 × 480 pixels/frame, the processing time would be approximately 7.7 ms/frame (130 f/s). This means that if this watermarking system is employed in practical applications (such as a surveillance video system) with an input video of 30 f/s, the implemented watermark system can watermark the video stream in real time. One reason for this high frame rate is the pipelined architecture of the watermarking system together with the video encoder. Another is that the encoded video follows the MJPEG standard, which requires less processing time than the MPEG standard. If the proposed system were integrated with an MPEG encoder, the processing time would be higher only because of the MPEG encoding process, as the watermark embedding process would remain the same.
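The cycle counts and frame rates quoted above follow directly from (3) and can be reproduced with a few lines:

```python
def frame_cycles(width, height, latency=320, throughput=64):
    """Clock cycles to watermark one frame, per (3): the first 8x8
    block costs the initial latency; every further block costs the
    pipeline throughput."""
    blocks = (width * height) // 64        # number of 8x8 blocks
    return latency + (blocks - 1) * throughput

def frame_time_ms(width, height, clock_hz=40e6):
    """Per-frame processing time in milliseconds at the given clock."""
    return frame_cycles(width, height) / clock_hz * 1e3
```

For 256 × 256 frames this yields 65 792 cycles, i.e., about 1.6 ms (607 f/s) at 40 MHz, matching the figures in the text.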
3) Robustness Analysis: As stated before, the WM system designed in this paper uses an invisible semifragile watermark that is meant to withstand certain legitimate manipulations, e.g., lossy compression and mild geometric changes of images, while rejecting malicious changes, e.g., cover-up attacks, cropping, segment removal, and so on.
To prove the robustness of the watermark, two sample video
sequences were embedded with watermark according to the
proposed algorithm. The cover-up attack is applied to the
watermarked sample video. The tampered video is analyzed
by the watermark detector, which outputs a detection map. The
detection map is used to indicate which blocks of the image
are suspected as inauthentic.
The results of the tamper detection of the video sequences are presented in Figs. 15 and 16. Only one frame of each video sample is shown here as an example. The following tampering was done in video sequence-1.
1) The book in the hand of the person was removed by
copying the contents of adjacent blocks onto the blocks
where the book was in the original frame. To an innocent
observer the original existence of the book in the video
frames is visually undetectable.
2) The segment containing the electrical outlet on the
wall under the white board was also removed. Hence
the electrical outlet was also undetectable in the video
frames.
In video sequence-2, the portion of the white horse was
covered up by a black horse, so for an innocent observer
it seems that there was no white horse in the frame. The

detection map successfully illustrated the blocks of the tampered video frame, which are suspected as inauthentic.
It is important to understand that the proposed WM system embeds the watermark into MJPEG video frames and enables the detection of tampered video frames. The detection is done in software by using the same algorithm utilized for the watermark embedding process. Both modifications are easily noticed using the detection map created by the watermark detector and presented in Figs. 15(d) and 16(d).

Fig. 15. Tamper detection of video sample-1. (a) Original video. (b) Watermarked video. (c) Tampered video. (d) Detected video.

Fig. 16. Tamper detection of video sample-2. (a) Original video. (b) Watermarked video. (c) Tampered video. (d) Detected video.

TABLE II
Comparisons With Other Video WM Chips

Research Works          Design Type              Type of WM        Video Standard   Processing Domain/Method   Logic Cells (kilogates)   Clock Frequency and Processing Speed                 PSNR
Strycker et al. [12]    DSP board                Invisible-Robust  N/A              Spatial                    N/A                       100 MHz                                              N/A
Mathai et al. [14]      Custom IC (0.18 μm)      Invisible-Robust  N/A              Wavelet                    N/A                       75 MHz at 30 f/s (320 × 320)                         40 dB
Tsai and Wu [15]        Custom IC                Robust            Raw video        Spatial                    N/A                       N/A                                                  N/A
Maes et al. [16]        FPGA / Custom IC         Robust            MPEG             Spatial                    17 / 14                   N/A                                                  N/A
Petitjean et al. [39]   FPGA board / DSP board   Invisible-Robust  MPEG             Fractal                    N/A                       50 MHz takes 6 μs; 250 MHz takes 118 μs              N/A
Mohanty et al. [41]     FPGA                     Visible           MPEG-4           DCT                        28.3                      100 MHz at 43 f/s (320 × 240)                        Around 30 dB
This Paper              FPGA                     Semifragile       MJPEG            DCT                        9.263                     40 MHz at 607 f/s (256 × 256); 130 f/s (640 × 480)   44 dB

4) Security Issues: The encryption of the watermark mainly depends on the statistical properties of the pseudorandom sequence generated by the random number generator (RNG) module, since the primitive watermark is modulated by the pseudorandom sequence.

Hence, it is necessary that the pseudorandom sequence
should have the statistical property as close as possible to a
true random sequence [23]. A true random binary sequence
has equal distribution of 1s and 0s. Another indication of the
randomness of the sequence is the autocorrelation values of
the sequence. This feature is crucial for the resistance of the
sequence to correlation attacks [23].
In order to evaluate the statistical quality of the pseudorandom number sequence generated by the Gollmann cascade F-FCSR RNG module, designed in [23], two tests were carried out. First, a comparison was made between the autocorrelation values of the proposed Gollmann cascade F-FCSR RNG and an F-FCSR RNG of similar size (which is well studied and known to have good statistical properties [34]). The second


evaluation was done using the statistical test suite available
through the National Institute of Standards and Technology
(NIST) [35]. NIST provides a comprehensive tool to verify
the statistical quality of a pseudorandom sequence through
various tests. These tests check the properties of the sequence
and compare the results with the expected values of a true
random sequence [35]. The tests were all successful and
proved that the proposed RNG meets the standards compared
to other published RNGs, while providing a simple design
methodology [23].
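The two basic statistical properties discussed here, the balance of 1s and 0s and low off-peak autocorrelation, can be checked on any generated bit sequence with a short sketch like the following (the NIST suite [35], of course, performs far more thorough testing):

```python
def balance(bits):
    """Fraction of 1s in the sequence: close to 0.5 for a sequence
    resembling a true random binary sequence."""
    return sum(bits) / len(bits)

def autocorrelation(bits, lag):
    """Normalized autocorrelation of the +/-1-mapped sequence.
    Values near 0 for lag != 0 indicate resistance to correlation
    attacks, as discussed in [23]."""
    s = [2 * b - 1 for b in bits]          # map {0,1} -> {-1,+1}
    n = len(s) - lag
    return sum(s[i] * s[i + lag] for i in range(n)) / n
```

For example, a strictly alternating 0101... sequence is perfectly balanced yet has autocorrelation of −1 at lag 1, showing why balance alone is not sufficient evidence of randomness.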
5) Comparison With Existing Research: The proposed hardware-based video WM authentication features minimum video frame quality degradation, with no visually perceptible artifacts and high PSNR values. Moreover, the results are comparable to those generated by software-based algorithms. However, the complexity of the proposed algorithm is lower, since only the intraframes are modified, and the results shown are achieved with only two DCT coefficients in each 8 × 8 block being changed. Furthermore, the watermark embedding is performed directly on the compressed video streams, which minimizes computationally demanding operations.
Table II presents a comparative perspective with other
published hardware-based video WM systems. The proposed
WM system is the first that employs an invisible semifragile
watermark approach in the frequency domain (DCT) for video
streams with fewer logic gates and higher processing speed
compared to other works, such as in [12]–[16], [39], and [41].
VI. Conclusion
A. Summary and Conclusion
The design of the hardware architecture of a novel digital video watermarking system to authenticate video streams in real time was presented in this paper. An FPGA-based prototype of the hardware architecture was developed. The proposed system is suitable for implementation on an FPGA and can also be used as part of an ASIC; in the current implementation, the FPGA was the simplest available vehicle for a proof of concept. The implementation enables integration with peripheral video devices (such as surveillance cameras) to achieve real-time image data protection. The aim of this paper was to achieve three objectives. First, to propose a new hardware architecture of a digital watermarking system for video authentication and make it suitable for VLSI implementation. Second, to ensure that the watermarking algorithm achieves a level of security sufficient to withstand certain potential threats. Third, to make the watermarking system suitable for real-time video, so that it can be easily adapted to commonly used digital video compression standards with minor video frame degradation.
In contrast to existing solutions, where robust WM algorithms were mainly used, a semifragile WM system for video authentication was developed in this paper. The proposed watermark system is capable of watermarking video streams in the DCT domain in real time. It was also demonstrated that the designed system achieves the required security level with minor video frame quality degradation.

B. Future Research
Future research should concentrate on applying the watermarking algorithm to other modern video compression standards, such as MPEG-4/H.264, so that it can be utilized in various commercial applications as well. Embedding the watermark information within high-resolution video streams in real time is another challenge.
Acknowledgment
The authors would like to thank Dr. E. Pecht, Technologies
and Beyond, University of Calgary, Calgary, AB, Canada, for
his constructive suggestions and helpful advice.
References
[1] V. M. Potdar, S. Han, and E. Chang, “A survey of digital image
watermarking techniques,” in Proc. IEEE Int. Conf. Ind. Informatics,
Aug. 2005, pp. 709–716.
[2] A. D. Gwenael and J. L. Dugelay, “A guide tour of video watermarking,”
Signal Process. Image Commun., vol. 18, no. 4, pp. 263–282, Apr. 2003.
[3] A. Piva, F. Bartolini, and M. Barni, “Managing copyright in open
networks,” IEEE Trans. Internet Comput., vol. 6, no. 3, pp. 18–26, May–
Jun. 2002.
[4] Y. Shoshan, A. Fish, X. Li, G. A. Jullien, and O. Yadid-Pecht, "VLSI watermark implementations and applications," Int. J. Inform. Technol. Knowl., vol. 2, no. 4, pp. 379–386, Jun. 2008.
[5] X. Li, Y. Shoshan, A. Fish, G. A. Jullien, and O. Yadid-Pecht, “Hardware
implementations of video watermarking,” in International Book Series
on Information Science and Computing, no. 5. Sofia, Bulgaria: Inst. Inform. Theories Applicat. FOI ITHEA, Jun. 2008, pp. 9–16 (supplement
to the Int. J. Inform. Technol. Knowledge, vol. 2, 2008).
[6] I. J. Cox, J. Kilian, F. T. Leighton, and T. Shamoon, “Secure spread
spectrum watermarking for multimedia,” IEEE Trans. Image Process.,
vol. 6, no. 12, pp. 1673–1687, Dec. 1997.
[7] S. P. Mohanty. (1999). Digital Watermarking: A Tutorial
Review [Online]. Available: http://www.linkpdf.com/download/dl/
digital-watermarking-a-tutorial-review-.pdf
[8] F. Hartung and B. Girod, “Watermarking of uncompressed and compressed video,” IEEE Trans. Signal Process., vol. 66, no. 3, pp. 283–302,
May 1998.
[9] T. L. Wu and S. F. Wu, “Selective encryption and watermarking of
MPEG video,” in Proc. Int. Conf. Image Sci. Syst. Technol. (CISST),
Jun. 1997, pp. 0–9.
[10] A. Shan and E. Salari, “Real-time digital video watermarking,” in Proc.
Dig. Tech. Papers: Int. Conf. Consumer Electron., Jun. 2002, pp. 12–13.
[11] L. Qiao and K. Nahrstedt, “Watermarking methods for MPEG encoded
video: Toward resolving rightful ownership,” in Proc. IEEE Int. Conf.
Multimedia Comput. Syst., Jun. 1998, pp. 276–285.
[12] L. D. Strycker, P. Termont, J. Vandewege, J. Haitsma, A. Kalker,
M. Maes, and G. Depovere, “Implementation of a real-time digital
watermarking process for broadcast monitoring on Trimedia VLIW
processor,” Proc. Inst. Elect. Eng. Vision, Image Signal Process., vol.
147, no. 4, pp. 371–376, Aug. 2000.
[13] N. J. Mathai, A. Sheikholesami, and D. Kundur, “Hardware implementation perspectives of digital video watermarking algorithms,” IEEE Trans.
Signal Process., vol. 51, no. 4, pp. 925–938, Apr. 2003.
[14] N. J. Mathai, A. Sheikholesami, and D. Kundur, “VLSI implementation
of a real-time video watermark embedder and detector,” in Proc. Int.
Symp. Circuits Syst., vol. 2. May 2003, pp. 772–775.
[15] T. H. Tsai and C. Y. Wu, “An implementation of configurable digital
watermarking systems in MPEG video encoder,” in Proc. Int. Conf.
Consumer Electron., Jun. 2003, pp. 216–217.
[16] M. Maes, T. Kalker, J. P. Linnartz, J. Talstra, G. Depoyere, and J.
Haitsma, “Digital watermarking for DVD video copy protection,” IEEE
Signal Process. Mag., vol. 17, no. 5, pp. 47–57, Sep. 2000.
[17] X. Wu, J. Hu, Z. Gu, and J. Huang, “A secure semifragile watermarking for image authentication based on integer wavelet transform with
parameters,” in Proc. Australian Workshop Grid Comput. E-Research,
vol. 44. 2005, pp. 75–80.
[18] Information Technology: Generic Coding of Moving Pictures and Associated Audio Information, ISO/IEC 13818-2:1996(E), Video International Standard, 1996.


[19] K. Jack, Video Demystified: A Handbook for the Digital Engineer, 2nd
ed. Eagle Rock, VA: LLH Technology Publishing, 2001.
[20] I. E. G. Richardson, H.264 and MPEG-4 Video Compression. Chichester,
U.K.: Wiley, 2003.
[21] F. Bartolini, M. Barni, A. Tefas, and I. Pitas, “Image authentication
techniques for surveillance applications,” Proc. IEEE, vol. 89, no. 10,
pp. 1403–1418, Oct. 2001.
[22] J. Dittmann, T. Fiebig, R. Steinmetz, S. Fischer, and I. Rimac, “Combined video and audio watermarking: Embedding content information
in multimedia data,” in Proc. SPIE Security Watermarking Multimedia
Contents II, vol. 3971. Jan. 2000, pp. 455–464.
[23] X. Li, Y. Shoshan, A. Fish, and G. A. Jullien, “A simplified approach
for designing secure random number generators in HW,” in Proc. IEEE
Int. Conf. Electron. Circuits Syst., Aug. 2008, pp. 372–375.
[24] G. R. Nelson, G. A. Jullien, and O. Yadid-Pecht, “CMOS image sensor
with watermarking capabilities,” in Proc. IEEE Int. Symp. Circuits Syst.
vol. 5. May 2005, pp. 5326–5329.
[25] Y. Shoshan, A. Fish, G. A. Jullien, and O. Yadid-Pecht, “Hardware
implementation of a DCT watermark for CMOS image sensors,” in Proc.
IEEE Int. Conf. Electron. Circuits Syst., Aug. 2008, pp. 368–371.
[26] Theora.org. (2012, Mar. 8) [Online]. Available: http://www.theora.org/
[27] A. Filippov, “Encoding high-resolution Ogg/Theora video with reconfigurable FPGAs,” Xcell J. (Plugging into High-Volume Consumer
Products), no. 53, pp. 19–21, 2005 [Online]. Available: http://www.
xilinx.com/publications/archives/xcell/Xcell53.pdf
[28] (2012, Mar. 8) [Online]. Available: http://www.videolan.org/vlc/features.
php?cat=video
[29] (2011, Apr. 25) [Online]. Available: http://www.mplayerhq.hu/design7/
info.html
[30] (2012, Jan. 13) [Online]. Available: http://wiki.xiph.org/index.php/
TheoraSoftwarePlayers
[31] B. Schneier, Applied Cryptography, 2nd ed. New York: Wiley, 1996.
[32] V. M. Potdar, S. Han, and E. Chang, “A survey of digital image
watermarking techniques,” in Proc. 3rd IEEE Int. Conf. Ind. Informatics,
2005, pp. 709–716.
[33] F. Petitcolas, R. J. Anderson, and M. G. Kuhn, “Information hiding: A
survey,” Proc. IEEE, vol. 87, no. 7, pp. 1062–1078, 1999.
[34] F. Arnault, T. Berger, and A. Necer, “A new class of stream ciphers
combining LFSR and FCSR architectures,” in Proc. Adv. Cryptology
INDOCRYPT, LNCS 2551. 2002, pp. 22–33.
[35] NIST. (2012, Jun. 19). A Statistical Test Suite for the Validation of
Random Number Generators and Pseudo Random Number Generators
for Cryptographic Applications [Online]. Available: http://csrc.nist.gov/
groups/ST/toolkit/rng/documentation_software.html
[36] (2012, Jul. 28) [Online]. Available: http://en.wikipedia.org/wiki/Digital_watermarking
[37] (2012, Apr. 16) [Online]. Available: http://en.wikipedia.org/wiki/Motion_JPEG
[38] Y.-J. Jeong, K.-S. Moon, and J.-N. Kim, “Implementation of real time
video watermark embedder based on Haar wavelet transform using
FPGA,” in Proc. 2nd Int. Conf. Future Generation Commun. Networking
Symp., 2008, pp. 63–66.
[39] G. Petitjean, J. L. Dugelay, S. Gabriele, C. Rey, and J. Nicolai, “Towards
realtime video watermarking for systems-on-chip,” in Proc. IEEE Int.
Conf. Multimedia Expo, vol. 1. 2002, pp. 597–600.
[40] S. P. Mohanty, “A secure digital camera architecture for integrated realtime digital rights management,” J. Syst. Architecture, vol. 55, nos. 10–
12, pp. 468–480, Oct.–Dec. 2009.
[41] S. P. Mohanty and E. Kougianos, “Real-time perceptual watermarking
architectures for video broadcasting,” J. Syst. Softw., vol. 84, no. 5, pp.
724–738, May 2011.
[42] S. Saha, D. Bhattacharyya, and S. K. Bandyopadhyay, “Security on
fragile and semifragile watermarks authentication,” Int. J. Comput.
Applicat., vol. 3, no. 4, pp. 23–27, Jun. 2010.
Sonjoy Deb Roy received the B.Sc. degree
in electrical and electronic engineering from the
Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, in 2009. Currently, he
is pursuing the M.Sc. degree with the Integrated
Sensors Intelligent System Laboratory, Department
of Electrical and Computer Engineering, University
of Calgary, Calgary, AB, Canada.
His current research interests include hardware
implementation of secured digital watermarking systems for image and video authentication.


Xin Li received the B.Eng. degree in electrical engineering from Nantong
University, Nantong, China, and the M.Sc. degree from the Integrated Sensors
Intelligent System Laboratory, University of Calgary, Calgary, AB, Canada,
in 2010.
His current research interests include digital watermarking system design
for image and video.

Yonatan Shoshan received the B.Sc. degree in electrical engineering from
Ben-Gurion University, Beer-Sheva, Israel, in 2007, and the M.Sc. degree
from the ATIPS Laboratory, University of Calgary, Calgary, AB, Canada, in
2009.
He is currently with Texas Instruments, Inc., Raanana, Israel. His current
research interests include smart CMOS image sensors and watermarking.

Alexander Fish (M’06) received the B.Sc. degree
in electrical engineering from the Technion-Israel
Institute of Technology, Haifa, Israel, in 1999, and
the M.Sc. and Ph.D. (summa cum laude) degrees
from Ben-Gurion University, Beer-Sheva, Israel, in
2002 and 2006, respectively.
He was a Post-Doctoral Fellow with the ATIPS
Laboratory, University of Calgary, Calgary, AB,
Canada, from 2006 to 2008. In 2008, he joined BenGurion University as a Faculty member with the
Electrical and Computer Engineering Department.
He has authored over 60 scientific papers and patent applications. He has also published two book chapters. His current research interests include low-voltage digital design, energy-efficient SRAM, Flash memory arrays, and low-power CMOS image sensors.
Dr. Fish serves as the Editor-in-Chief for the MDPI Journal of Low
Power Electronics and Applications and as an Associate Editor for the IEEE
Sensors Journal.

Orly Yadid-Pecht (S’90–M’95–SM’01–F’07) received the B.Sc. degree from the Electrical Engineering Department, Technion-Israel Institute of
Technology, Haifa, Israel, and the M.Sc. and D.Sc.
degrees from the Technion-Israel Institute of Technology in 1990 and 1995, respectively.
She was a National Research Council Research
Fellow from 1995 to 1997 in the areas of advanced
image sensors with the Jet Propulsion Laboratory
and the California Institute of Technology, Pasadena.
She joined Ben-Gurion University, Beer-Sheva, Israel, as a member with the Electrical and Electro-Optical Engineering Department in 1997. There she founded the VLSI Systems Center, specializing
in CMOS image sensors. She was affiliated with the ATIPS Laboratory,
University of Calgary, Calgary, AB, Canada, from 2003 to 2005, promoting
the area of integrated sensors. She has been an iCORE Professor with the
Integrated Sensors Intelligent System Laboratory, University of Calgary, since
2009. She has published over 100 papers and patents and has led over a dozen
research projects supported by government and industry. She has co-authored
and co-edited the first book on CMOS image sensors, CMOS Imaging:
From Photo-Transduction to Image Processing, 2004. She also serves as the
Director of the boards of two companies. Her current research interests include
integrated CMOS sensors, smart sensors, image processing hardware, and
micro and biomedical system implementations.
Dr. Yadid-Pecht has served on different IEEE Transactions editorial
boards, has been the General Chair of the IEEE International Conference on
Electronic Circuits and Systems, and is a member of the Steering Committee
of this conference.
