An adaptive energy-efficient area coverage algorithm for wireless sensor networks

Javad Akbari Torkestani

Young Researchers Club, Arak Branch, Islamic Azad University, Arak, Iran

Article info

Article history: Received 3 August 2012; Received in revised form 1 February 2013; Accepted 9 March 2013; Available online 19 March 2013.

Keywords: Area coverage; Minimum weight CDS; Degree-constrained CDS; WSN; Learning automata.

Ad Hoc Networks 11 (2013) 1655–1666. doi:10.1016/j.adhoc.2013.03.002

Corresponding author. Tel./fax: +98 861 3422292. E-mail address: [email protected]

Abstract

The connected dominating set (CDS) concept has recently emerged as a promising approach to area coverage in wireless sensor networks (WSNs). However, the major problem affecting the performance of the existing CDS-based coverage protocols is that they aim at maximizing the number of sleep nodes to save more energy. This places a heavy load on the active sensors (dominators), which must handle a large number of neighbors. The rapid exhaustion of the active sensors may disconnect the network topology and leave the area uncovered. Therefore, to make a good trade-off between network connectivity, coverage, and lifetime, a proper number of sensors must be activated. This paper presents a degree-constrained minimum-weight extension of the CDS problem, called DCDS, to model area coverage in WSNs. A proper choice of the degree constraint of DCDS balances the network load on the active sensors and significantly improves the network coverage and lifetime. A learning automata-based heuristic named LAEEC is proposed for finding a near optimal solution to the proxy equivalent DCDS problem in WSNs. The computational complexity of the proposed algorithm for finding a $\frac{1}{1-\epsilon}$-optimal solution of the area coverage problem is approximated. Several simulation experiments are conducted to show the superiority of the proposed area coverage protocol over the existing CDS-based methods in terms of control message overhead, percentage of covered area, residual energy, number of active nodes (CDS size), and network lifetime.

© 2013 Elsevier B.V. All rights reserved.

1. Introduction

A wireless sensor network (WSN) is a multi-hop, infrastructureless, and self-organized network comprising a group of small and power-constrained sensors deployed over a vast region for different purposes such as environment monitoring, object or target tracking, and industrial automation and control [1,2]. Although recent advances in sensor technology, micro-electromechanical systems, and wireless communications have greatly promoted the emergence of modern WSNs, these networks remain constrained in terms of memory, power supply, and processing power [3-5]. Due to these severe resource limitations, coverage is the most fundamental and challenging issue in WSNs, focusing on how well the sensors cover the monitoring region [6-8]. The WSN coverage problem aims to minimize the number of sensor nodes to be activated while maintaining full coverage of the monitoring area [9]. Beyond WSNs, coverage is also a well-known classic problem in computational geometry. The art gallery problem [10], in which cameras are located to monitor every point in an art gallery, and the ocean area coverage problem [11], in which satellites are placed in orbit to provide maximum ocean monitoring, also deal with the coverage problem [9].

Depending on the subject to be covered, the coverage problem can be classified as area coverage [12,13], barrier (or path) coverage [14-16], and target (or point) coverage [13,17]. Area coverage deals with the problem of covering all the points within the monitoring area. The area


coverage problem, which aims at minimizing the number of active nodes without failing to cover the entire area, is the most common form of the coverage problem [9]. Barrier coverage monitors a boundary region (or barrier) within the sensor field, aiming at minimizing the probability of undetected penetration through the barrier; it is used to detect intruders attempting to penetrate a protected region. The target coverage problem intends to cover a set of stationary or moving points within the sensor field [18]. All types of the coverage problem aim to minimize the number of sensors required to cover the area, barrier, or targets. A fundamental solution to the coverage problem is to place the sensors deterministically at predetermined locations in the sensor field. Deterministic sensor placement can be applied only to a relatively small sensor network deployed in a friendly environment. However, when a large sensor network is deployed in a hostile, harsh, and hard-to-access field, random sensor deployment, generally scattered from an aircraft, might be the only choice [18,19]. In random deployment, to guarantee complete coverage of the sensor field, the number of sensors that must be scattered is significantly larger than the number actually required [18]. Under such circumstances, a coverage protocol that minimizes the required number of active sensors significantly improves the performance of the WSN in terms of energy consumption and network lifetime.

The connected dominating set (CDS) principle has recently emerged as a new solution to energy-efficient coverage in WSNs. A CDS of a given graph is a connected subset of the graph vertices such that every vertex of the graph is either in the set or adjacent to at least one vertex of the set. Several approaches have been proposed to solve the CDS problem. To name just a few, Li et al. [20] and Alzoubi et al. [21] proposed MIS (maximal independent set)-based greedy algorithms to construct the CDS, while Dai and Wu [22] and Butenko et al. [23] presented prune-based heuristics for the CDS problem. In a CDS-based coverage protocol, a virtual backbone covering every point within the sensor field is formed [4,9]. Misra et al. [9] proposed an energy-efficient solution to maintain coverage in WSNs; their method preserves network connectivity by forming a network backbone, and aims both to cover the area of interest and to minimize the number of active sensors. Wightman and Labrador [24] proposed an approximate CDS-based solution called A3 to the topology control problem in WSNs. A3 assumes that the sensors have no information about the position of their neighbors, and hence about the network topology; instead, the distance between nodes is estimated from the received signal strength. A3 is generally composed of two processes, a neighborhood discovery process and a children selection process; a third process, called the second opportunity process, is rarely used and only in special cases. The residual energy of a child node and its distance from the parent are the two metrics A3 uses to construct the CDS-based tree. This selection rule gives higher priority to child nodes with higher energy and greater distance from the parent node. A3 uses four messages for topology construction: a hello message sent out by the parent; a parent recognition message, including the residual energy and signal strength, sent back by the children; a children recognition message, including the sorted list of all children and their timeouts, sent out by the parent; and a sleeping message sent by the active candidate node. Wightman and Labrador [24] also proposed an extended version of A3 called A3Lite that uses only two messages (hello and parent recognition) for topology construction, avoiding the large children recognition message. A3 sends a sleep message to all nodes in the reduced tree topology that are within the communication area of other nodes. This may leave some points of the area uncovered when the sensing range is considerably smaller than the communication range. The same authors in [6] proposed two solutions, called A3Cov and A3CovLite, for the coverage problem of A3. A3Cov first checks whether an unconnected node is sensing-covered by another active node. If so, the node is sent directly to the sleeping mode, because it is needed neither for connectivity nor for coverage. Otherwise, the node is asked to stay awake for an extra period of time. If it receives a sensing coverage message (indicating that the sensing area of the node is already covered) from its neighbors before the timeout expires, it goes to the sleeping mode; otherwise, the node must remain active. A3CovLite [6] combines A3Lite, which reduces the required number of messages compared to the original A3, with A3Cov, which solves the area coverage problem of A3. Rizvi et al. [4] proposed a distributed energy-efficient topology control algorithm, referred to as A1, for connected area coverage in WSNs. Like the A3 family of protocols, A1 uses signal strength and residual energy as the criteria for selecting dominator nodes, but it uses only the hello message to construct the CDS-based backbone and builds the topology in a single phase. The starting node first discovers its neighbors; similarly, the neighbors of the initiator node discover their neighbors, and this process continues until the complete topology is formed with the backbone nodes. Like A3Cov, A1 lets the children calculate and set the timeout value independently.

A review of the literature reveals critical problems affecting the performance of the existing CDS-based connected area coverage protocols. These protocols generally aim at covering the sensor field with the minimum number of active nodes (i.e., constructing the minimum size CDS). This reduces energy consumption by turning off a large number of sensors. On the other hand, it significantly shortens the sensor lifetime (even for energetic nodes), because a heavy burden is placed on the active sensors for handling a large number of neighbors while the sensors severely suffer from limited energy and processing power. Exhausting the energy of the active sensors may leave the area uncovered. Conversely, although redundant active nodes (i.e., a large CDS) extend the covered area, the overlapped sensing areas increase the network energy consumption, which in turn reduces the network lifetime. Therefore, the CDS must be constructed such that a good trade-off is made between coverage (covered area) and network lifetime. In this paper, the degree-constrained minimum-weight version of the CDS, called DCDS, is presented to alleviate the above-mentioned problems of the CDS-based area coverage protocols. DCDS is the CDS having


the minimum weight subject to a predefined degree constraint. The weight associated with each node is defined as the inverse of its residual energy; therefore, the DCDS maximizes the network lifetime by selecting the sensors with the maximum residual energy. A DCDS is a CDS in which no node has a degree greater than a predefined degree constraint. Therefore, by a proper choice of the degree constraint, the DCDS is able to make a trade-off between the percentage of the covered area and the network lifetime. This paper proposes a learning automata-based heuristic called LAEEC (short for learning automata-based energy-efficient coverage protocol) to construct the DCDS in WSNs. The computational complexity of the proposed algorithm for finding a $\frac{1}{1-\epsilon}$-optimal solution of the area coverage problem is approximated. Extensive simulation experiments are performed to evaluate the proposed area coverage algorithm. The obtained results show the superiority of the proposed algorithm over the best existing methods in terms of control message overhead, percentage of covered area, residual energy, number of active nodes (CDS size), and network lifetime.

The rest of the paper is organized as follows. Section 2 briefly reviews learning automata theory. In Section 3, the proposed area coverage algorithm is presented. Section 4 approximates the time complexity of the proposed algorithm for finding a $\frac{1}{1-\epsilon}$-optimal solution of the area coverage problem. Section 5 evaluates the performance of the proposed algorithm through simulation experiments and comparison with the existing methods. Section 6 concludes the paper.

2. Learning automata theory

A learning automaton [25,26] is an adaptive decision-making unit that improves its performance by learning how to choose the optimal action from a finite set of allowed actions through repeated interactions with a random environment. An action is chosen at random based on a probability distribution kept over the action-set, and at each instant the chosen action serves as the input to the random environment. The environment responds to the taken action with a reinforcement signal, and the action probability vector is updated based on this reinforcement feedback. The objective of a learning automaton is to find the optimal action from the action-set so that the average penalty received from the environment is minimized. Learning automata have a wide variety of applications in combinatorial optimization problems [33,34,38,39], computer networks [36,37,42-45], Grid computing [30,32,41], and Web engineering [31,35,40].

The environment can be described by a triple $\{\alpha, \beta, c\}$, where $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_r\}$ represents the finite set of inputs, $\beta = \{\beta_1, \beta_2, \ldots, \beta_m\}$ denotes the set of values that can be taken by the reinforcement signal, and $c = \{c_1, c_2, \ldots, c_r\}$ denotes the set of penalty probabilities, where the element $c_i$ is associated with the given action $\alpha_i$. If the penalty probabilities are constant, the random environment is said to be a stationary random environment; if they vary with time, the environment is called non-stationary. Depending on the nature of the reinforcement signal $\beta$, environments can be classified into P-model, Q-model, and S-model. Environments in which the reinforcement signal can take only the two binary values 0 and 1 are referred to as P-model environments. Another class of environments allows the reinforcement signal to take a finite number of values in the interval [0, 1]; such an environment is referred to as a Q-model environment. In S-model environments, the reinforcement signal lies in the interval [a, b].

Learning automata can be classified into two main families [25]: fixed structure learning automata and variable structure learning automata. Variable structure learning automata are represented by a triple $\langle \beta, \alpha, L \rangle$, where $\beta$ is the set of inputs, $\alpha$ is the set of actions, and $L$ is the learning algorithm, a recurrence relation used to modify the action probability vector. Let $\alpha_i(k) \in \alpha$ and $p(k)$ denote the action selected by the learning automaton and the probability vector defined over the action set at instant $k$, respectively. Let $a$ and $b$ denote the reward and penalty parameters, which determine the amount of increase and decrease of the action probabilities, respectively, and let $r$ be the number of actions that can be taken by the learning automaton. At each instant $k$, the action probability vector $p(k)$ is updated by the linear learning algorithm given in Eq. (1) if the selected action $\alpha_i(k)$ is rewarded by the random environment, and as given in Eq. (2) if the taken action is penalized.

$$p_j(k+1) = \begin{cases} p_j(k) + a\,[1 - p_j(k)] & j = i \\ (1-a)\,p_j(k) & \forall j \neq i \end{cases} \qquad (1)$$

$$p_j(k+1) = \begin{cases} (1-b)\,p_j(k) & j = i \\ \dfrac{b}{r-1} + (1-b)\,p_j(k) & \forall j \neq i \end{cases} \qquad (2)$$

If $a = b$, the recurrence Eqs. (1) and (2) are called the linear reward-penalty ($L_{R-P}$) algorithm; if $b \ll a$, they are called the linear reward-$\epsilon$-penalty ($L_{R-\epsilon P}$) algorithm; and finally, if $b = 0$, they are called the linear reward-inaction ($L_{R-I}$) algorithm. In the latter case, the action probability vector remains unchanged when the taken action is penalized by the environment.
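As a minimal sketch, the reward and penalty updates of Eqs. (1) and (2) can be written as two small functions (the function names are illustrative, not from the paper):

```python
# Hypothetical sketch of the linear learning-algorithm updates of Eqs. (1)-(2).

def reward_update(p, i, a):
    """Eq. (1): reward selected action i with learning rate a."""
    return [pj + a * (1 - pj) if j == i else (1 - a) * pj
            for j, pj in enumerate(p)]

def penalty_update(p, i, b):
    """Eq. (2): penalize selected action i with penalty rate b."""
    r = len(p)
    return [(1 - b) * pj if j == i else b / (r - 1) + (1 - b) * pj
            for j, pj in enumerate(p)]
```

Both updates keep the vector a probability distribution; with $b = 0$ the penalty update degenerates to the identity, which is exactly the $L_{R-I}$ behavior described above.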

A variable action-set learning automaton is an automaton in which the number of actions available at each instant changes with time. It has been shown in [26] that a learning automaton with a changing number of actions is absolutely expedient and also $\epsilon$-optimal when the reinforcement scheme is $L_{R-I}$. Such an automaton has a finite set of $r$ actions, $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_r\}$. $A = \{A_1, A_2, \ldots, A_m\}$ denotes the set of action subsets, and $A(k) \subseteq \alpha$ is the subset of all the actions that can be chosen by the learning automaton at instant $k$. The selection of a particular action subset is made at random by an external agency according to the probability distribution $W(k) = \{W_1(k), W_2(k), \ldots, W_m(k)\}$ defined over the possible subsets of actions, where

$$W_i(k) = \mathrm{prob}[A(k) = A_i \mid A_i \in A],\quad 1 \leq i \leq 2^r - 1.$$

$\hat{p}_i(k) = \mathrm{prob}[\alpha(k) = \alpha_i \mid A(k),\ \alpha_i \in A(k)]$ denotes the probability of choosing action $\alpha_i$, conditioned on the event that the action subset $A(k)$ has already been selected and $\alpha_i \in A(k)$. The scaled probability $\hat{p}_i(k)$ is defined as

$$\hat{p}_i(k) = \frac{p_i(k)}{K(k)} \qquad (3)$$


where $K(k) = \sum_{\alpha_i \in A(k)} p_i(k)$ is the sum of the probabilities of the actions in subset $A(k)$, and $p_i(k) = \mathrm{prob}[\alpha(k) = \alpha_i]$.

The procedure of choosing an action and updating the action probabilities in a variable action-set learning automaton can be described as follows. Let $A(k)$ be the action subset selected at instant $k$. Before choosing an action, the probabilities of all the actions in the selected subset are scaled as defined in Eq. (3). The automaton then randomly selects one of its available actions according to the scaled action probability vector $\hat{p}(k)$. Depending on the response received from the environment, the learning automaton updates its scaled action probability vector; note that only the probabilities of the available actions are updated. Finally, the probability vector of the actions of the chosen subset is rescaled as $p_i(k+1) = \hat{p}_i(k+1) \cdot K(k)$, for all $\alpha_i \in A(k)$. The absolute expediency and $\epsilon$-optimality of the method described above have been proved in [26].
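The scale-select-update-rescale cycle just described can be sketched as one step of a variable action-set automaton. This is an illustrative implementation under the $L_{R-I}$ scheme; the function and variable names are ours, not from [26]:

```python
import random

# Illustrative sketch of one step of a variable action-set learning automaton.

def step(p, available, a, rewarded, rng):
    """Scale p over `available` (Eq. (3)), pick an action, apply L_R-I, rescale."""
    K = sum(p[i] for i in available)              # K(k): mass of available actions
    p_hat = {i: p[i] / K for i in available}      # Eq. (3): scaled probabilities
    # choose an action according to the scaled probability vector
    x, acc, chosen = rng.random(), 0.0, available[-1]
    for i in available:
        acc += p_hat[i]
        if x <= acc:
            chosen = i
            break
    if rewarded:                                  # L_R-I: update only on reward
        for i in available:
            p_hat[i] = p_hat[i] + a * (1 - p_hat[i]) if i == chosen \
                       else (1 - a) * p_hat[i]
    q = list(p)
    for i in available:                           # rescale: p_i(k+1) = p̂_i(k+1)·K(k)
        q[i] = p_hat[i] * K
    return chosen, q
```

Note that the probabilities of unavailable actions are left untouched, and the rescaling restores the total mass of the subset to $K(k)$, so the full vector remains a distribution.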

3. Energy-efficient area coverage algorithm

This paper proposes the degree-constrained minimum-weight connected dominating set (DCDS) problem for modeling the energy-efficient coverage problem in WSNs. The CDS size remains the primary concern of the CDS-based coverage protocols. On one hand, MCDS (minimum size CDS)-based coverage saves more energy by maximizing the number of sleep nodes; on the other hand, it rapidly drains the active nodes and shortens the network lifetime. Conversely, although redundant active nodes extend the network lifetime, they increase the network energy consumption. DCDS aims at making a good trade-off between the network coverage, lifetime, and energy consumption through an additional constraint on the node degree.

3.1. Problem statement

Let $G\langle V, E, W\rangle$ be a weighted, connected, and undirected graph, where $V$ denotes the vertex set, $E$ denotes the edge set, and $W$ denotes the set of weights associated with the graph nodes. A CDS of a graph is a connected subset of the graph vertices such that every vertex of the graph is either in the set or adjacent to at least one vertex of the set. The MCDS is the CDS with the minimum cardinality, and the minimum weight CDS (MwCDS) is the CDS having the minimum total weight. Let $D_i$ be the degree of vertex $v_i \in V$, defined as its number of neighboring vertices. The degree-constrained connected dominating set of graph $G$ is a CDS of $G$ subject to $D_i \leq d$ (for all $v_i \in V$), where $d$ is a positive integer denoting the degree constraint. The degree-constrained minimum-weight CDS (DCDS) problem seeks the CDS with the minimum weight subject to a degree constraint $d$.
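The feasibility conditions above (domination, connectivity, and the degree bound) can be checked mechanically. The following is a hedged sketch on an adjacency-list graph; `is_dcds` and its arguments are illustrative names, not from the paper:

```python
# Sketch: check whether `subset` is a degree-constrained CDS of graph `adj`.

def is_dcds(adj, subset, d):
    """True iff `subset` is a connected dominating set of `adj` whose
    members all have degree (in the full graph) at most d."""
    nodes = set(adj)
    s = set(subset)
    if not s:
        return False
    # domination: every vertex is in s or adjacent to a member of s
    dominated = s | {v for u in s for v in adj[u]}
    if dominated != nodes:
        return False
    # degree constraint: D_i <= d for every member of the set
    if any(len(adj[u]) > d for u in s):
        return False
    # connectivity of the subgraph induced on s (depth-first search)
    seen, stack = set(), [next(iter(s))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(v for v in adj[u] if v in s and v not in seen)
    return seen == s
```

For example, on a path graph 0-1-2-3-4, the set {1, 2, 3} is a valid degree-constrained CDS for $d = 2$, while {1, 3} dominates every vertex but is not connected.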

Let the triple $\langle N, L, E\rangle$ denote the topology graph of a WSN, where $N = \{n_1, n_2, \ldots\}$ is the set of sensor nodes, $L = \{l_{(n_i,n_j)}\} \subseteq N \times N$ is the set of communication links, and $E = \{E_{n_i} \mid \forall n_i \in N\}$ denotes the set of energies associated with the sensor nodes, $E_{n_i}$ being the residual energy of sensor node $n_i$. LAEEC aims to construct the most stable energy-efficient sensor network covering the monitoring area by finding a near optimal solution to the DCDS problem, where the weight of each node is defined as the inverse of its residual energy level. DCDS seeks the set of most energetic connected sensors whose maximum degree is bounded

above by $d$. Let $C = \{c_1, c_2, \ldots\}$ denote the set of all possible degree-constrained CDSs covering the sensing area. $c^*$ is the optimal solution to the DCDS problem (i.e., the degree-constrained CDS with the minimum weight) if

$$E_{c^*} = \max_{\forall c_i \in C}\left\{ \min_{\forall n_j \in c_i} \{E_{n_j}\} \right\}$$

where $E_{c^*}$ denotes the energy of the optimal degree-constrained CDS $c^*$, and $\min_{\forall n_j \in c_i}\{E_{n_j}\}$ denotes the energy of degree-constrained CDS $c_i$ subject to constraint $d$. The energy of a degree-constrained CDS is thus defined as the residual energy level of its least energetic active sensor.
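The max-min objective above (among candidate degree-constrained CDSs, pick the one whose weakest node is strongest) can be sketched in a few lines. Names are ours; `candidates` is assumed to map a CDS identifier to the residual energies of its nodes:

```python
# Illustrative sketch of the selection objective E_{c*} = max_i min_j E_{n_j}.

def best_cds(candidates):
    """Return (id, bottleneck energy) of the max-min candidate CDS."""
    return max(((cid, min(energies)) for cid, energies in candidates.items()),
               key=lambda t: t[1])
```

The bottleneck formulation captures the lifetime argument of Section 3: a backbone is only as durable as its least energetic member.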

3.2. Degree-constrained CDS-based area coverage algorithm

In this section, a fully distributed learning automata-based algorithm is proposed for solving the area coverage problem in WSNs by finding a near optimal solution to the degree-constrained minimum-weight CDS problem. In this algorithm, a group of learning automata, named GoL, is constituted by equipping each sensor node $n_i$ with a variable action-set learning automaton $A_i$. The duple GoL is defined as $\langle A(k), \alpha(k)\rangle$, where $A(k) = \{A_i \mid \forall n_i \in N(k)\}$ denotes the set of learning automata assigned to the sensor nodes, and $\alpha(k) = \{\alpha_i \mid \forall A_i\}$ denotes the set of actions that can be taken by each learning automaton $A_i$. Due to the frequent topology changes in a WSN, $N$, $L$, and $E$ are time-varying parameters; in this paper, they are written as $N(k)$, $L(k)$, and $E(k)$ for each instant $k$. Let $\alpha_i$ denote the set of actions that can be taken by learning automaton $A_i \in A(k)$. Each automaton $A_i$ takes the communication links incident at the corresponding node $n_i$ as its actions; that is, $\alpha_i(k) = \{\alpha_i^j(k) \mid \forall l_{(n_i,n_j)} \in L(k)\}$. GoL is isomorphic to the network topology graph, where the set of learning automata corresponds to the set of sensor nodes and the action-sets correspond to the set of communication links. Therefore, action-set $\alpha_i(k)$ is time-varying and its number of actions may change at each instant $k$. A learning automaton is a probabilistic learning tool that selects its actions at random according to an action probability vector (APV); the APV is the main component of a learning automaton and must be kept up-to-date. The action probability vector of learning automaton $A_i$ is defined as $p_i(k) = \{p_i^j(k) \mid \forall \alpha_i^j(k) \in \alpha_i(k)\}$,

where $p_i^j(k)$ denotes the choice probability of action $\alpha_i^j$ at stage $k$. In this algorithm, the APV of each learning automaton $A_i$ is initially set according to the energy levels of its neighboring nodes. Let $e_{n_i}(k) = \sum_{\forall l_{(n_i,n_j)} \in L(k)} E_{n_j}(k)$ denote the total energy level of the neighbors of sensor $n_i$ at stage $k$. The probability with which sensor $n_i$ selects sensor $n_j$ (i.e., link $l_{(n_i,n_j)}$) is then defined as $p_i^j(k) = \frac{E_{n_j}(k)}{e_{n_i}(k)}$ at stage $k$. This favors activating the sensor having the maximum energy level (in each neighborhood) to cover the sensor field.
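The APV initialization above assigns each neighbor a probability proportional to its residual energy. A minimal sketch, with illustrative names:

```python
# Sketch of the APV initialization: p_i^j = E_{n_j} / e_{n_i}, where e_{n_i}
# is the total residual energy of n_i's neighbors.

def initial_apv(neighbor_energy):
    """Map neighbor id -> initial choice probability of the link to it."""
    e_total = sum(neighbor_energy.values())
    return {j: e / e_total for j, e in neighbor_energy.items()}
```

By construction the probabilities sum to one, and the most energetic neighbor receives the largest initial share.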


Let us assume that the sink node $n_i$ starts the coverage process. As mentioned earlier, LAEEC is a fully distributed algorithm that runs at each sensor node independently. A flowchart of the proposed coverage procedure running at sensor node $n_i$ is shown in Fig. 1. The node that is running the algorithm is called the current node. At each instant $k$, each current node $n_i$ discovers its neighbors and forms its action-set by sending an ASF (action-set formation) message. Each node that receives the ASF message replies to it; the reply message includes the residual energy level of the node. Current node $n_i$ then forms its action-set based on the received replies. Due to network topology changes, a node may leave (or join) the neighborhood of another node at each stage. If link $l_{(n_i,n_j)}$ breaks at stage $k+1$, its corresponding action (i.e., $\alpha_i^j$) must be removed from the action-set of automaton $A_i$ (and action $\alpha_j^i$ from automaton $A_j$). Moreover,

the choice probability of each remaining action (e.g., $\alpha_i^{j'}$) must be updated as $p_i^{j'}(k+1) = p_i^{j'}(k)\left[1 + \frac{p_i^j(k)}{1 - p_i^j(k)}\right]$ in automaton $A_i$. When a new link $l_{(n_i,n_j)}$ is established at stage $k+1$, the choice probability of the new action is initialized to $1/|\alpha_i(k+1)|$, and that of the other actions is updated as $p_i^{j'}(k+1) = p_i^{j'}(k)\left[\frac{|\alpha_i(k+1)| - 1}{|\alpha_i(k+1)|}\right]$.
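The two maintenance rules amount to renormalizing the APV when an action disappears (the removed mass is spread over the survivors) and shrinking it uniformly when an action appears. A hedged sketch with illustrative names:

```python
# Sketch of action-set maintenance: on a link break the removed action's mass
# is redistributed by dividing by (1 - p_j); on a join the new action gets
# 1/|alpha| and the remaining probabilities shrink by (|alpha|-1)/|alpha|.

def remove_action(p, j):
    """Drop action j and renormalize the remaining probabilities."""
    pj = p.pop(j)
    return {k: v / (1.0 - pj) for k, v in p.items()}

def add_action(p, j):
    """Add action j with probability 1/n, scaling the others accordingly."""
    n = len(p) + 1
    q = {k: v * (n - 1) / n for k, v in p.items()}
    q[j] = 1.0 / n
    return q
```

Both operations preserve the total probability mass of one, which is what keeps the APV a valid distribution across topology changes.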

Let $c_k$ denote the degree-constrained CDS that is constructed at stage $k$, and let $c'_k$ be the set of sensor nodes covered by $c_k$ (the set of dominatees). $c_k$ is initially set to $\{n_i\}$, $c'_k$ is initialized to $n_i$ and its one-hop neighbors, and $E_{c_k}$ is initialized to $E_{n_i}$. Let $\bar{d}_k$ denote the average degree of $c_k$; it is defined (and updated) as $\bar{d}_k = \left(\sum_{\forall n_i \in c_k} D_i(k)\right)/|c_k|$ at each stage $k$, where $D_i(k)$ denotes the degree of node $n_i$ at stage

$k$. As shown in Fig. 1, the current node $n_i$ selects one of its actions at random. Let us assume that action $\alpha_i^j$ is selected by node $n_i$. Sensor $n_i$ adds sensor $n_j$ (corresponding to the selected action $\alpha_i^j$) to the constrained CDS $c_k$ and updates $\bar{d}_k$. The energy of $c_k$ (i.e., $E_{c_k}$) is set to $\min\{E_{n_j}, E_{c_k}\}$. Let $\bar{e}_{n_i}(k) = e_{n_i}(k)/D_i(k)$ denote the average energy level of the neighbors of node $n_i$ at stage $k$. Node $n_i$ compares the residual energy of sensor node $n_j$ with the average energy level $\bar{e}_{n_i}(k)$, and the average degree $\bar{d}_k$ with the degree constraint $d$. Then, $n_i$ updates the internal state of its automaton according to the following rules. If the residual energy level of the node selected by $n_i$ is higher than the average energy level of the neighbors of $n_i$ (i.e., if $n_j$ is among the most energetic neighbors of $n_i$) and $\bar{d}_k$ does not exceed the degree constraint $d$, learning automaton $A_i$ rewards the selected action $\alpha_i^j$ by Eq. (1). If the energy level of the selected node $n_j$ is lower than the average energy level $\bar{e}_{n_i}(k)$ and the average degree $\bar{d}_k$ is larger than the degree constraint $d$, learning automaton $A_i$ penalizes the selected action $\alpha_i^j$ by Eq. (2). Otherwise, the APV of $A_i$ remains unchanged.
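The three-way updating rule just described can be condensed into a single predicate. This is an illustrative sketch with names of our choosing:

```python
# Sketch of the LAEEC updating rule: reward when the selected neighbor beats
# the average neighbor energy and the average CDS degree respects the
# constraint; penalize when both conditions fail; otherwise leave the APV as is.

def decide(e_selected, e_avg_neighbors, avg_degree, d):
    """Return 'reward', 'penalize', or 'none' for the chosen action."""
    if e_selected > e_avg_neighbors and avg_degree <= d:
        return "reward"
    if e_selected < e_avg_neighbors and avg_degree > d:
        return "penalize"
    return "none"
```

The asymmetric middle case ("none") is what makes the scheme behave like $L_{R-I}$ in mixed situations: the APV only moves when the evidence is unambiguous.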

After learning automaton $A_i$ updates its APV, the current node $n_i$ sends an ACT (activation) message, including the degree-constrained CDS $c_k$, the dominatee set $c'_k$, the average degree $\bar{d}_k$, and the CDS energy level $E_{c_k}$, to activate the selected sensor node $n_j$. As soon as sensor $n_j$ receives an ACT message, it checks whether its ID number equals the receiver ID. If so, it adds the IDs of its one-hop neighbors to $c'_k$. If $c'_k$ includes all the network nodes (i.e., if the constructed CDS covers all the points within the sensor field), the current iteration $k$ of the coverage process is over; otherwise, node $n_j$ changes its state to the current node and performs the same operations as node $n_i$ did. The sensor node $n_i$ at which the coverage process completes broadcasts an SLP (sleep) message within the network through backbone $c_k$; this message includes only the degree-constrained CDS $c_k$ and energy level $E_{c_k}$. Each receiving sensor node $n_i$ goes to the sleep mode if it does not find its ID in $c_k$; otherwise, it goes to the active mode for sensing the area. The sensor network composed of the active nodes covers the monitoring area until the residual energy of an active sensor node falls below a predefined threshold $T_{n_i}$ or one or more active sensor nodes fail. The active node that finds its residual energy level lower than the energy threshold $T_{n_i}$, and the sensor node that detects the failure of an active node, are responsible for initiating a new coverage process.

Fig. 1. Flowchart of the proposed area coverage procedure running at sensor node $n_i$ at stage $k$.


4. Complexity analysis and convergence results

In this section, the computational complexity of the proposed area coverage algorithm, LAEEC, is analyzed. To do so, an upper bound (Lemma 1) and a lower bound (Lemma 2) on the number of iterations of the algorithm for finding a $\left(\frac{1}{1-\epsilon}\right)$-optimal action at each node $n_i$ (i.e., learning automaton $A_i$) are approximated, where $\epsilon$ is the error rate. Then, it is shown (in Theorem 1) that the time required for finding a $\left(\frac{1}{1-\epsilon}\right)$-optimal solution to the area coverage problem is confined between the estimated lower and upper bounds. Finally, it is proved (in Theorem 2) that the convergence time of LAEEC to a $\left(\frac{1}{1-\epsilon}\right)$-optimal area coverage (i.e., $\frac{1}{1-\epsilon}c^*$) is bounded by the convergence time of the network node with the maximum degree.

Theorem 1. Let $opt_i$ denote the optimal action that can be chosen by learning automaton $A_i$. If $A_i$ updates its action probability vector according to LAEEC, the time $T_i$ required for finding a $\left(\frac{1}{1-\epsilon}\right)opt_i$ at node $n_i$ satisfies

$$\Omega\!\left(\frac{E_{n^*}(k)}{e_{n_i}(k)}\right) \leq T_i \leq \Omega\!\left(\frac{E_{n^*}(k)}{e_{n_i}(k)}\,(1-a)^{D_i-1}\right), \qquad (4)$$

where

$$\Omega(\lambda) = \frac{2}{1-\lambda} + |c^*|\,\log\frac{|c^*|(1-\lambda)}{1-a}, \qquad (5)$$

$\epsilon \in (0,1)$ is the error rate, $a$ denotes the learning rate of the algorithm, $|c^*|$ denotes the cardinality of the optimal degree-constrained CDS, $n^*$ denotes the best (i.e., most energetic) node that can be chosen by learning automaton $A_i$, and $D_i$ denotes the degree of node $n_i$.

Proof. Let each learning automaton $A_i$ update its action probability vector $p_i$ according to the updating rule of LAEEC. Let $c^*$ be the optimal area coverage, and let $\alpha_i^*$ ($n^*$) denote the best action (active sensor) that can be selected by automaton $A_i$ (node $n_i$). Before stating the proof of this theorem, the following two lemmas are discussed. $\square$

Lemma 1. If each automaton $A_i$ updates its action probability vector according to the proposed updating algorithm, the upper bound on the running time for finding a $\frac{1}{1-\epsilon}opt_i$ is

$$\frac{2}{1-\lambda} + |c^*|\,\log\frac{|c^*|(1-\lambda)}{1-a},$$

where $\lambda \geq p_i^*\,(1-a)^{D_i-1}$.

Proof. Lemma 1 aims at computing the worst-case running time of the proposed algorithm. At each node $n_i$, the worst case occurs if the optimal sensor node $n^*$ (the node with the maximum residual energy satisfying the degree-constraint) is chosen after all the other nodes. In this case, the learning process is subdivided into two distinct phases: (1) the shrinking phase, and (2) the growing phase. In the first phase, it is assumed that all the other nodes, from the node with the minimum energy level to the most energetic node, are chosen in turn and rewarded before $n_i$ activates the optimal node $n^*$. Therefore, in the worst case, the choice probability of the optimal node $n^*$ at the end of the shrinking phase is computed as

$$p_i^*(D_i-1) \ge p_i^*(D_i-2)\,(1-a), \qquad (6)$$

where $p_i^*(D_i-1)$ denotes the choice probability of the optimal node $n^*$ (or optimal action $a_i^*$) at stage $(D_i-1)$, i.e., at the end of the shrinking phase, $D_i$ denotes the degree of node $n_i$, and $a$ is the learning rate of the proposed algorithm. Repeatedly substituting the recurrence function $p_i^*(\cdot)$ on the right-hand side of Inequality (6), we have $p_i^*(D_i-1) \ge p_i^*\,(1-a)^{D_i-1}$, where $p_i^*$ denotes the initial choice probability of the optimal node $n^*$. For the sake of simplicity in notation, $p_i^*(D_i-1)$ is substituted by $q_i^*$.

The growing phase starts when the optimal node $n^*$ is chosen for the first time by node $n_i$. Since the reinforcement scheme by which the proposed algorithm updates the probability vectors is $L_{R-I}$, the conditional expectation of $q_i^*(k)$ (i.e., the choice probability of the optimal node at stage $k$ of the growing phase) remains unchanged when the other nodes are selected; it increases only when the optimal node is selected. Therefore, during the growing phase, the change in the conditional expectation of $q_i^*(k)$ is always non-negative and is given by

$$\begin{aligned}
q_i^*(1) &= q_i^* + a\left(1-q_i^*\right)\\
q_i^*(2) &= q_i^*(1) + a\left(1-q_i^*(1)\right) = q_i^*(1)(1-a) + a\\
&\;\;\vdots\\
q_i^*(k-1) &= q_i^*(k-2) + a\left(1-q_i^*(k-2)\right) = q_i^*(k-2)(1-a) + a\\
q_i^*(k) &= q_i^*(k-1) + a\left(1-q_i^*(k-1)\right) = q_i^*(k-1)(1-a) + a
\end{aligned} \qquad (7)$$
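As an illustrative sketch (not the paper's implementation), the $L_{R-I}$ update driving both phases can be simulated directly: rewarding the chosen action $j$ increases $p_j$ by $a(1-p_j)$ and scales every other component by $(1-a)$, which is exactly the shrinking factor behind Inequality (6) and the growth recurrence of Eq. (7). The action count and probability values below are assumptions for illustration:

```python
def lri_reward(p, j, a):
    """L_R-I update after action j is rewarded: p_j grows by a*(1 - p_j),
    every other component shrinks by the factor (1 - a)."""
    return [a + (1 - a) * q if m == j else (1 - a) * q
            for m, q in enumerate(p)]

# Four actions with a uniform initial probability vector (illustrative values)
p = [0.25, 0.25, 0.25, 0.25]
for _ in range(3):                     # reward action 0 three times
    p = lri_reward(p, 0, a=0.15)

assert abs(sum(p) - 1.0) < 1e-12       # still a probability distribution
assert abs(p[1] - 0.25 * (1 - 0.15) ** 3) < 1e-12  # unrewarded actions decay by (1-a)^k
```

On a penalty, $L_{R-I}$ leaves the vector unchanged, which is why $q_i^*(k)$ stays constant whenever a non-optimal node is selected during the growing phase.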

where $k$ denotes the number of times $n_i$ selects the optimal node $n^*$ before the following stop condition (derived from the Bonferroni correction [27] to achieve an error rate lower than $\epsilon$ for the optimal area coverage $\gamma^*$) is met:

$$q_i^*(k) = 1 - \frac{\epsilon}{|\gamma^*|}, \qquad (8)$$

where $|\gamma^*|$ denotes the cardinality of the optimal area coverage.

After substituting the recurrence function $q_i^*(k)$, we have

$$\begin{aligned}
q_i^*(k) &= q_i^*(k-1)(1-a) + a\\
&= \left[q_i^*(k-2)(1-a) + a\right](1-a) + a\\
&= q_i^*(k-2)(1-a)^2 + a(1-a) + a\\
&= \left[q_i^*(k-3)(1-a) + a\right](1-a)^2 + a(1-a) + a\\
&= q_i^*(k-3)(1-a)^3 + a(1-a)^2 + a(1-a) + a\\
&\;\;\vdots\\
&= q_i^*(1)(1-a)^{k-1} + a(1-a)^{k-2} + \cdots + a(1-a) + a\\
&= q_i^*(1-a)^k + a(1-a)^{k-1} + \cdots + a(1-a) + a.
\end{aligned}$$

Hence, we have

$$q_i^*(k) = q_i^*(1-a)^k + a(1-a)^{k-1} + \cdots + a(1-a) + a. \qquad (9)$$

1660 J. Akbari Torkestani / Ad Hoc Networks 11 (2013) 1655–1666

After algebraic simplifications, we have

$$q_i^*(k) = q_i^*(1-a)^k + a\left(1 + (1-a) + (1-a)^2 + \cdots + (1-a)^{k-1}\right)$$

and

$$q_i^*(k) = q_i^*(1-a)^k + a\sum_{i=0}^{k-1}(1-a)^i. \qquad (10)$$

The second term on the right-hand side of Eq. (10) is a geometric series that sums to $a\left(\frac{1-(1-a)^k}{1-(1-a)}\right)$, provided $|1-a| < 1$. Since the learning rate $a \in (0,1)$, we have

$$q_i^*(k) = q_i^*(1-a)^k + a\left(\frac{1-(1-a)^k}{1-(1-a)}\right) \qquad (11)$$

and

$$q_i^*(k) = q_i^*(1-a)^k + 1 - (1-a)^k. \qquad (12)$$
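The closed form of Eq. (12) can be checked against the recurrence of Eq. (7) numerically; the parameter values below are illustrative, not taken from the paper:

```python
a = 0.15   # learning rate (illustrative)
q0 = 0.2   # initial choice probability of the optimal action (illustrative)
k = 40

# Iterate the growing-phase reward update q <- q + a*(1 - q) of Eq. (7)
q = q0
for _ in range(k):
    q = q + a * (1 - q)

# Closed form of Eq. (12): q_i*(k) = q_i*(1-a)^k + 1 - (1-a)^k
closed = q0 * (1 - a) ** k + 1 - (1 - a) ** k
assert abs(q - closed) < 1e-9
```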

From Eqs. (8) and (12), we have

$$q_i^*(1-a)^k + 1 - (1-a)^k = 1 - \frac{\epsilon}{|\gamma^*|} \qquad (13)$$

and

$$(1-a)^k = \frac{\epsilon}{|\gamma^*|\left(1-q_i^*\right)}. \qquad (14)$$

Taking $\log_{1-a}$ of both sides of Eq. (14), we derive

$$k = \log_{1-a}\frac{\epsilon}{|\gamma^*|\left(1-q_i^*\right)}. \qquad (15)$$

As mentioned earlier, during the growing phase, $q_i^*$ remains unchanged when the other nodes are penalized. Therefore, $k$ does not include the number of times the other nodes are selected. Let $q_i^*$ be the choice probability of the optimal node at the beginning of the growing phase. After $k$ iterations, $q_i^*$ reaches $1-\frac{\epsilon}{|\gamma^*|}$. On the other hand, the choice probability of all the other nodes is initially $1-q_i^*$ and reaches $\frac{\epsilon}{|\gamma^*|}$ after the same number of iterations. Therefore, the number of times the other nodes are selected before the stop condition given in Eq. (8) is met is

$$\frac{1 + q_i^* + \frac{\epsilon}{|\gamma^*|}}{1 - q_i^* - \frac{\epsilon}{|\gamma^*|}}\,k. \qquad (16)$$

Let $\phi$ denote the total number of iterations required to satisfy the stop condition. From Eq. (16), we have

$$\phi = \frac{2}{1 - q_i^* - \frac{\epsilon}{|\gamma^*|}}\,k.$$

By substituting $k$ from Eq. (15), we have

$$\phi = \frac{2}{1 - q_i^* - \frac{\epsilon}{|\gamma^*|}}\,\log_{1-a}\frac{\epsilon}{|\gamma^*|\left(1-q_i^*\right)}. \qquad (17)$$
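A hedged numerical reading of Eqs. (15) and (17) may help fix the quantities involved; all parameter values below are assumptions for illustration, not values from the paper's experiments:

```python
import math

a = 0.15          # learning rate (assumed)
eps = 0.05        # error rate (assumed)
gamma_size = 10   # |gamma*|, cardinality of the optimal coverage (assumed)
q = 0.3           # choice probability at the start of the growing phase (assumed)

# Eq. (15): number of reward steps k = log_{1-a}( eps / (|gamma*| (1 - q)) )
k = math.log(eps / (gamma_size * (1 - q)), 1 - a)

# Eq. (17): total iterations phi = 2k / (1 - q - eps/|gamma*|)
phi = 2 * k / (1 - q - eps / gamma_size)

# Sanity check: after ceil(k) reward steps, the closed form of Eq. (12)
# pushes the choice probability past the stop level 1 - eps/|gamma*| of Eq. (8)
steps = math.ceil(k)
q_k = q * (1 - a) ** steps + 1 - (1 - a) ** steps
assert q_k >= 1 - eps / gamma_size
assert phi > k > 0
```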

From Inequality (7) and Eq. (17), it is concluded that the time complexity of LAEEC for finding a $\frac{1}{1-\epsilon}\,opt_i$ is less than

$$\frac{2}{1 - q_i^* - \frac{\epsilon}{|\gamma^*|}}\,\log_{1-a}\frac{\epsilon}{|\gamma^*|\left(1-q_i^*\right)}, \qquad (18)$$

where $q_i^* \ge p_i^*(1-a)^{D_i-1}$, and hence the proof of Lemma 1. □

Lemma 2. If the action probability vector $p_i$ (of each automaton $A_i$) is updated according to the updating rules of LAEEC, the running time of LAEEC for finding a $\frac{1}{1-\epsilon}\,opt_i$ is greater than

$$\frac{2}{1 - p_i^* - \frac{\epsilon}{|\gamma^*|}}\,\log_{1-a}\frac{\epsilon}{|\gamma^*|\left(1-p_i^*\right)}.$$

Proof. Lemma 2 considers the running time of the proposed algorithm in the best case, when $n_i$ selects the optimal node $n^*$ before the others. In this case, the learning process does not include the shrinking phase. Therefore, $p_i^*$ denotes the choice probability of the optimal node at the beginning of the growing phase. Similar to the proof of Lemma 1, it can easily be proved that the minimum number of iterations required to satisfy the stop condition (8) is

$$\frac{2}{1 - q_i^* - \frac{\epsilon}{|\gamma^*|}}\,\log_{1-a}\frac{\epsilon}{|\gamma^*|\left(1-q_i^*\right)}, \qquad (19)$$

where $q_i^* = p_i^*$, which completes the proof of Lemma 2. □

From Inequalities (18) and (19), it can be concluded that

$$\frac{2}{1 - p_i^* - \frac{\epsilon}{|\gamma^*|}}\,\log_{1-a}\frac{\epsilon}{|\gamma^*|\left(1-p_i^*\right)} \le T_i \le \frac{2}{1 - q_i^* - \frac{\epsilon}{|\gamma^*|}}\,\log_{1-a}\frac{\epsilon}{|\gamma^*|\left(1-q_i^*\right)},$$

where $q_i^* \ge p_i^*(1-a)^{D_i-1}$.

As described in Section 3, for each action $a_i^j$, the initial probability $p_i^j$ is set to $\frac{E_{n_j}(k)}{e_{n_i}(k)}$, where $e_{n_i}(k) = \sum_{(n_i,n_j)\in L(k)} E_{n_j}(k)$. Therefore, the initial probability $p_i^*$ is set to $\frac{E_{n^*}(k)}{e_{n_i}(k)}$.

Therefore, we have

$$\Psi\!\left(\frac{E_{n^*}(k)}{e_{n_i}(k)}\right) \le T_i \le \Psi\!\left(\frac{E_{n^*}(k)}{e_{n_i}(k)}\,(1-a)^{D_i-1}\right),$$

where

$$\Psi(\kappa) = \frac{2}{1-\kappa-\frac{\epsilon}{|\gamma^*|}}\,\log_{1-a}\frac{\epsilon}{|\gamma^*|(1-\kappa)},$$

which completes the proof of the theorem.

Theorem 2. Let $n_\phi$ denote the network node with the maximum degree $D$. The time complexity of the proposed algorithm for finding a $\frac{1}{1-\epsilon}$ optimal solution to the coverage problem is

$$\Psi\!\left(\frac{E_{n^*}(k)}{e_{n_\phi}(k)}\right) \le T \le \Psi\!\left(\frac{E_{n^*}(k)}{e_{n_\phi}(k)}\,(1-a)^{D-1}\right),$$

where

$$\Psi(\kappa) = \frac{2}{1-\kappa-\frac{\epsilon}{|\gamma^*|}}\,\log_{1-a}\frac{\epsilon}{|\gamma^*|(1-\kappa)}.$$

Proof. As mentioned earlier, the proposed algorithm is independently run at each node, and each learning automaton locally updates its internal state to converge to the optimal action. Therefore, node $n_\phi$ requires the maximum number of iterations for finding the $\frac{1}{1-\epsilon}$ optimal action of learning automaton $A_\phi$. On the other hand, from Lemmas 1 and 2, the running time of the proposed algorithm for finding a $\frac{1}{1-\epsilon}$ optimal coverage is limited by the upper and lower bounds on the running time of the algorithm for the node with the maximum degree $D$. Therefore, it is concluded that the time taken by the proposed algorithm for finding a $\frac{1}{1-\epsilon}$ optimal coverage is

$$\Psi\!\left(\frac{E_{n^*}(k)}{e_{n_\phi}(k)}\right) \le T \le \Psi\!\left(\frac{E_{n^*}(k)}{e_{n_\phi}(k)}\,(1-a)^{D-1}\right),$$

where $\Psi(\kappa) = \frac{2}{1-\kappa-\frac{\epsilon}{|\gamma^*|}}\,\log_{1-a}\frac{\epsilon}{|\gamma^*|(1-\kappa)}$, which completes the proof of Theorem 2. □
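The bound function $\Psi$ of Theorem 2 is easy to evaluate numerically. The sketch below is illustrative only: the learning rate, error rate, $|\gamma^*|$, maximum degree $D$, and the energy ratio $E_{n^*}(k)/e_{n_\phi}(k)$ (called `kappa` here) are all assumed values, not values from the paper:

```python
import math

def psi(kappa, a, eps, gamma_size):
    """Bound function of Eq. (5):
    Psi(kappa) = 2 / (1 - kappa - eps/|gamma*|) * log_{1-a}( eps / (|gamma*| (1 - kappa)) )."""
    return (2.0 / (1.0 - kappa - eps / gamma_size)
            * math.log(eps / (gamma_size * (1.0 - kappa)), 1.0 - a))

a, eps, gamma_size = 0.15, 0.05, 10    # assumed parameters
kappa = 0.3                            # assumed ratio E_{n*}(k)/e_{n_phi}(k)
D = 7                                  # assumed maximum node degree

t_at_kappa = psi(kappa, a, eps, gamma_size)
t_at_shrunk = psi(kappa * (1 - a) ** (D - 1), a, eps, gamma_size)
assert t_at_kappa > 0 and t_at_shrunk > 0   # both endpoints of the bound are finite and positive
```

Both endpoints grow without bound as the error rate $\epsilon$ is driven to zero, matching the intuition that a tighter accuracy target costs more iterations.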

5. Experiments

In this section, several simulation experiments are

conducted to show the performance of the proposed

CDS-based area coverage algorithm. The results of the

proposed method are compared with those of three

CDS-based energy-efficient area coverage protocols, A3 [24], A3CovLite [6], and A1 [4], in terms of control message overhead, percentage of covered area, residual energy, number of active nodes (CDS size), and network

lifetime. In these experiments, the wireless sensor net-

work is set up as follows. The wireless sensor nodes are uniformly distributed at random within a square sensor deployment area of size 150(m) × 150(m).

The number of sensor nodes ranges from 50 to 250 with

increment step 50. The radio transmission range of each

sensor node is set to 20(m), and the sensing range of each

node is set to 10(m). The size of each data packet is 100

bytes. The simulation time of each experiment is 1500

(s). Each sensor node has an omnidirectional antenna

with a ﬁxed radio propagation range. IEEE 802.11 [28]

(Distributed Coordination Function) with CSMA/CA (Car-

rier Sense Multiple Access/Collision Avoidance) is used

as the medium access control protocol, and two-ray ground as the radio propagation model. The maximum

energy level of each sensor node is 2.0(J), and the initial

energy level of the sensors is randomly selected from the uniform distribution defined over the interval [1.5(J), 2.0(J)].

The energy model presented by Heinzelman et al. [29]

is used for estimating the amount of energy consumption.

In this energy model, which is based on the first-order radio model, each sensor node consumes 50 (nJ/bit) to run the transmitter or the receiver circuitry. Each sensor node also consumes 100 (pJ/bit/m²) to drive the transmit amplifier. Therefore, the amount of energy required for receiving a k-bit data packet is estimated as

k (bit) × 50 (nJ/bit) = 50k (nJ)

The amount of energy consumed to transmit a message of length k to a destination node located x(m) from the transmitter is computed as

k (bit) × 50 (nJ/bit) + k (bit) × 100 (pJ/bit/m²) × x² (m²) = 50k (nJ) + 100kx² (pJ)
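A minimal sketch of this first-order radio energy model (the function names are illustrative, not from the paper):

```python
def rx_energy_nj(k_bits):
    """Energy (nJ) to receive a k-bit packet: 50 nJ/bit of receiver electronics."""
    return 50.0 * k_bits

def tx_energy_nj(k_bits, x_m):
    """Energy (nJ) to transmit a k-bit packet over x meters: 50 nJ/bit of
    transmitter electronics plus a 100 pJ/bit/m^2 amplifier term (100 pJ = 0.1 nJ)."""
    return 50.0 * k_bits + 0.1 * k_bits * x_m ** 2

# A 100-byte data packet (as in the experiments) sent over the 20 m radio range:
k = 100 * 8
assert rx_energy_nj(k) == 40000.0                 # 50 nJ/bit * 800 bit
assert tx_energy_nj(k, 20) == 40000.0 + 32000.0   # amplifier: 0.1 * 800 * 400 nJ
```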

In these experiments, the proposed area coverage algo-

rithm, LAEEC, is conﬁgured as follows. The environment

in which the learning automata perform is assumed to be

P-model. Each learning automaton updates its action probability vector according to the reinforcement scheme $L_{R-I}$. In

these experiments, LAEEC is calibrated by tuning degree-

constraint d and learning rate a as follows. The covered

area and network lifetime are measured, where degree-

constraint d changes from 2 to 15. The obtained results

show that the best trade-off between the covered area

and network lifetime is made when degree-constraint d

is set to 7. The same experiment is conducted to adjust

the learning rate, where a changes from 0.05 to 0.5. The re-

sults show that LAEEC has the best performance when the

learning rate is set to 0.15. The energy threshold $T_{n_i}$ is defined as $0.5\,E_{c_k}$; that is, a new coverage process initiates when the energy level $E_{c_k}$ falls to 50% of its initial value. All timeouts are set to 100 ms. In the case of A3, the weights are set

as follows: $W_E$, denoting the weight for the remaining energy in the node, is set to 0.5, and $W_D$, denoting the weight for the distance from the parent node, is also set to 0.5, where $W_D + W_E = 1$. For the A3-based protocols, the timers are set as $t_0 = 1.5$, $t_1 = 30.0$, $t_2 = 15.0$, and $t_3 = 60.0$.

5.1. Number of active nodes

This metric is deﬁned as the average number of nodes

that are activated to cover the sensor ﬁeld (i.e., the average

CDS size). This metric implicitly shows the number of

dominators in the CDS. The residual energy of the network

is inversely proportional to the number of active nodes.

Therefore, the energy-efﬁcient protocols try to minimize

the number of active nodes. Fig. 2 shows the number of ac-

tive nodes (dominators in CDS) against the total number of

network nodes. From the results shown in this ﬁgure it can

be seen that A3 has the minimum number of active nodes

as compared to the other protocols. This is because A3

algorithm uses a selection metric that gives priority to nodes farther from the parent with a higher energy level. This method considerably reduces the CDS size. As

shown in Fig. 3, this results in the lower coverage rate of

A3. A3CovLite uses extra active nodes to cover the points

of the sensor field left uncovered in A3. So, it requires

more active nodes than A3.

Fig. 2. The number of active nodes vs. the total number of network nodes.

Though A1 provides a higher coverage rate than A3 and A3CovLite, it suffers from many redundant active nodes. This is due to the fact that

A1 forms the reduced topology without any metric aimed at reducing the size of the CDS. From the results

shown in Fig. 2, it can be seen that the number of active

nodes in LAEEC is larger than that of A3-based approaches

and smaller than that of A1. In LAEEC, the number of active

nodes is controlled by degree-constraint d. Larger values of

d reduce the number of active nodes, while smaller values of d lead to a very large CDS.

5.2. Covered area

This metric shows the percentage of the sensing area

that is covered by the active sensor nodes. The coverage

percentage is a measure of the quality of service (QoS) of

the coverage protocol. Fig. 3 shows the percentage of the

sensor ﬁeld covered by the selected active nodes in differ-

ent algorithms as a function of the network size. From the

results shown in this ﬁgure, it can be seen that the covered

area signiﬁcantly increases as the network density (i.e., the

number of nodes in the network) increases. This is because

the number of scattered nodes within the simulation area

considerably exceeds the number of active nodes required

in optimal deployment. The obtained results depicted in

Fig. 3 also show that A3 provides the minimum area cover-

age as compared to the other protocols. This is due to the

fact that A3 sends a sleep message to the nodes within

the communication area of the other nodes in the reduced

tree topology. This may cause some points of the area to remain uncovered when the sensing range is smaller than

the communication range. A3CovLite solves the coverage

problem with A3 by sending a node to the sleep mode if

it is sensing-covered by another active node. That is why A3CovLite has a higher rate of area coverage than A3.

Comparing the results shown in Fig. 3, it is observed that

the proposed area coverage protocol LAEEC always covers

the whole monitoring area. This is because LAEEC first

constructs a degree-constrained CDS-based backbone cov-

ering all the network points. Then, it sends all the non-

dominators to the sleep mode. A1 outperforms A3 and

A3CovLite in terms of covered area. This can be due to

using a larger number of active nodes to cover the area.

5.3. Residual energy

The residual energy is deﬁned as the average remaining

energy of the active sensor nodes at the end of each simu-

lation experiment. Fig. 4 shows the average residual energy

of the active sensor nodes as a function of the network size.

From the results shown in this ﬁgure, it is observed that

the average residual energy level of the proposed area cov-

erage algorithm is signiﬁcantly higher than the other

methods. This is due to the fact that the proposed method

makes a good trade-off between the number of active

nodes (required to cover the area) and the amount of en-

ergy consumption in each active node by selection of a

proper degree-constraint. This signiﬁcantly reduces the

number of nodes covering the same points of the area,

while avoiding the rapid exhaustion of the active sensors from handling a huge number of neighbors. The results also

show that A3 has the lowest residual energy level, and

A3CovLite slightly outperforms A3. This is because A3-

based approaches reduce the number of backbone nodes

by activating the farther nodes from the parent node. This

causes a non-uniform distribution of the communication

overhead and places a heavy load on the active nodes.

Therefore, A3-based approaches result in the imbalanced

energy consumption within the network. Comparing the

results of A1 and A3, it can be seen that A1 provides a significantly higher residual energy level as compared to A3.

This is due to the fact that in the A1 protocol the nodes calculate the timeout using selection criteria that result in a balanced virtual backbone.

5.4. Network lifetime

Network lifetime is deﬁned as the average period of

time during which the set of active sensors remain con-

nected. Minimizing the energy consumption and maximiz-

ing the network lifetime are the major concerns of the

design of the coverage protocols. Network lifetime implic-

itly shows the energy-efﬁciency and load balancing of the

coverage protocol. Fig. 5 shows the changes in the network

lifetime as the number of network nodes changes from 50

to 250 with increment step 50.

Fig. 3. The percentage of the covered area vs. the network size.

Fig. 4. The average residual energy of the active nodes as a function of the number of nodes.

From the results shown in Fig. 5, it can be seen that for all coverage algorithms, the

network lifetime reduces as the network size increases.

This can be due to the fact that the backbone size grows, which makes it harder to evenly distribute the network load on the backbone nodes. Comparing the curves de-

picted in Fig. 5, it is observed that A3 has the shortest life-

time and the proposed area coverage algorithm has the

longest lifetime. The main objective of the proposed cover-

age algorithm is to extend the lifetime of the network de-

ployed to monitor the area as much as possible. To do so, it

uses the degree-constrained minimum-weight CDS concept

to activate the nodes having the maximum residual energy

level, and to evenly distribute the network load on the ac-

tive nodes. This signiﬁcantly extends the lifetime of the ac-

tive nodes. As mentioned earlier, A3 tries to reduce the

required number of active nodes to cover the area. On

one hand, this reduces the total energy consumption of

the network by keeping a larger number of sensors in sleep

mode. However, on the other hand, the heavy burden

placed on the small set of active nodes drains them sooner.

This signiﬁcantly shortens the network lifetime. A3CovLite

shows a better performance in Fig. 5 as compared to A3. As

shown in Fig. 2, this is achieved by adding redundant active

nodes (dominators) to the CDS. However, as the curves

show in Fig. 4, there is no signiﬁcant gap between the aver-

age residual energy level of A3 and A3CovLite. A1 uses the

largest set of (redundant) active nodes to cover the area.

The results given in Fig. 5 show its superiority over A3

and A3CovLite.

5.5. Control message overhead

In this experiment, the control message overhead is

defined as the number of (extra) control messages required for the coverage (degree-constrained CDS formation)

process. The extra messages are the control messages that

are used to construct the CDS-based backbone (i.e., the

message overhead of the coverage protocol). This metric

is measured as the number of control messages that must

be sent per second. Fig. 6 depicts the control message

overhead of the coverage algorithms vs. the number of

nodes. The results show that LAEEC has the lowest control

message overhead and A3 has the highest one. The results

also reveal that A3CovLite lags far behind A1. The reason

for the high message overhead of A3 is that this proto-

col uses four messages to construct the CDS backbone.

The message complexity of A3 (in the worst case) is 4n,

where n is the number of network nodes. A3 uses a chil-

dren recognition message of size 100 bytes as well as

three other messages of size 25 bytes. A3CovLite only

uses two messages of size 25 bytes. The message com-

plexity of A3CovLite is at most 2n. Therefore, it has a

meaningfully lower message overhead than A3. A1 uses

only one type of message (a hello message of size 25 by-

tes) for CDS formation (having message complexity n).

That is why A1 outperforms A3 and A3CovLite in terms

of control message overhead. The proposed area coverage

algorithm uses only an activation (ACT) message to con-

struct the CDS structure. ACT is a variable-length message

whose size is in the interval $[1, |\gamma_k|]$ bytes. The number of times this message is exchanged between the active nodes is $|\gamma_k|$. Therefore, the average message complexity of LAEEC is $|\gamma_k|^2/2$ bytes, which is significantly lower than that of A1.

6. Conclusion

Over the past couple of decades, CDS has received a lot

of attention and found many applications in wireless net-

working such as routing, clustering, backbone formation,

and multicasting. CDS has recently emerged as an innova-

tive approach to model the area coverage problem in

wireless sensor networks and several CDS-based area cov-

erage protocols have been proposed. However, the major

problem affecting the performance of the existing CDS-

based coverage protocols is that they aim at maximizing

the number of sleep nodes to save more energy. This im-

poses a heavy burden on the active nodes for handling a

large number of neighbors. The rapid exhaustion of the

active nodes may disconnect the network topology and

leave the area uncovered.

Fig. 5. Network lifetime vs. the number of nodes.

Fig. 6. The control message overhead vs. the number of network nodes.

This paper proposed a degree-constrained minimum-weight extension of the CDS problem called DCDS to model the area coverage problem in

WSNs. Selection of an optimal degree-constraint for the

DCDS balances the network load on the active nodes

and improves the network coverage, connectivity, and

lifetime. This paper designed a learning automata-based

heuristic called LAEEC for ﬁnding a near optimal solution

to the proxy equivalent DCDS problem in WSN. The com-

putational complexity of the proposed algorithm to ﬁnd a

$\frac{1}{1-\epsilon}$ optimal solution of the area coverage problem is

approximated. Several simulation experiments were per-

formed to show the performance of the proposed area

coverage algorithm. The results show that LAEEC outper-

forms the existing CDS-based coverage protocols in terms

of the control message overhead, percentage of covered

area, residual energy, number of active nodes (CDS size),

and network lifetime.

References

[1] Y. Zeng, C.J. Sreenan, N. Xiong, L.T. Yang, J.H. Park, Connectivity and

coverage maintenance in wireless sensor networks, Journal of

Supercomputing 52 (2010) 23–46.

[2] S. Sengupta, S. Das, M.D. Nasir, B.K. Panigrahi, Multi-objective node

deployment in WSNs: in search of an optimal trade-off among

coverage, lifetime, energy consumption, and connectivity,

Engineering Applications of Artiﬁcial Intelligence (2012). http://

dx.doi.org/10.1016/j.engappai.2012.05.01.

[3] C. Zhu, C. Zheng, L. Shu, G. Han, A survey on coverage and

connectivity issues in wireless sensor networks, Journal of

Network and Computer Applications 35 (2012) 619–632.

[4] S. Rizvi, H.K. Qureshi, S.A. Khayam, V. Rakocevic, M. Rajarajan, A1: an

energy efﬁcient topology control algorithm for connected area

coverage in wireless sensor networks, Journal of Network and

Computer Applications 35 (2012) 597–605.

[5] M.A. Guvensan, A.G. Yavuz, On coverage issues in directional sensor

networks: a survey, Ad Hoc Networks 9 (2011) 1238–1255.

[6] P.M. Wightman, M.A. Labrador, A family of simple distributed

minimum connected dominating set-based topology construction

algorithms, Journal of Network and Computer Applications 34

(2011) 1997–2010.

[7] H.M. Ammari, S.K. Das, Centralized and clustered k-coverage

protocols for wireless sensor networks, IEEE Transactions on

Computers 61 (1) (2012) 118–133.

[8] M. Hefeeda, H. Ahmadi, Energy-efﬁcient protocol for deterministic

and probabilistic coverage in sensor networks, IEEE Transactions on

Parallel and Distributed Systems 21 (5) (2010) 579–593.

[9] S. Misra, M.P. Kumar, M.S. Obaidat, Connectivity preserving

localized coverage algorithm for area monitoring using wireless

sensor networks, Computer Communications 34 (2011) 1484–

1496.

[10] D.T. Lee, A.K. Lin, Computational complexity of art gallery

problems, IEEE Transactions on Information Theory 32 (2) (1986)

276–282.

[11] W.W. Gregg, W.E. Esaias, G.C. Feldman, R. Frouin, S.B. Hooker, C.R.

McClain, R.H. Woodward, Coverage opportunities for global ocean

color in a multimission era, IEEE Transactions on Geoscience and

Remote Sensing 36 (5) (1998) 1620–1627.

[12] C.F. Huang, Y.C. Tseng, A survey of solutions to the coverage

problems in wireless sensor networks, Journal of Internet

Technology 6 (1) (2005) 1–8.

[13] M. Cardei, J. Wu, Energy-efﬁcient coverage problems in wireless ad

hoc sensor networks, Computer Communications 29 (4) (2006) 413–

420.

[14] M.K. Watfa, S. Commuri, Boundary coverage and coverage boundary

problems in wireless sensor networks, International Journal of

Sensor Networks 2 (3) (2007) 273–283.

[15] S. Ram, D. Majunath, S. Iyer, D. Yogeshwaran, On the path coverage

properties of random sensor networks, IEEE Transactions on Mobile

Computing 6 (5) (2007) 494–506.

[16] X. Cheng, D.Z. Du, L. Wang, B. Xu, Relay sensor placement in wireless

sensor networks, Wireless Networks 14 (2008) 347–355.

[17] Z. Fang, J. Wang, Convex combination approximation for the min-

cost WSN point coverage problem, in: Proceedings of the Third

International Conference on Wireless Algorithms, Systems, and

Applications, Dallas, Texas, 2008, pp. 188–199.

[18] B. Wang, H.B. Lim, D. Ma, A survey of movement strategies for

improving network coverage in wireless sensor networks, Computer

Communications 32 (2009) 1427–1436.

[19] A. Ghosh, S.K. Das, Coverage and connectivity issues in wireless

sensor networks: a survey, Pervasive and Mobile Computing 4

(2008) 303–334.

[20] Y. Li, M.T. Thai, F. Wang, C.W. Yi, P.J. Wang, D.Z. Du, On Greedy

Construction of Connected Dominating Sets in Wireless Networks,

Special issue of Wireless Communications and Mobile Computing

(WCMC), 2005.

[21] K.M. Alzoubi, X.Y. Li, Y. Wang, P.J. Wan, O. Frieder, Geometric

spanners for wireless ad hoc network, IEEE Transactions on Parallel

and Distributed Systems 14 (4) (2003) 408–421.

[22] F. Dai, J. Wu, An extended localized algorithm for connected

dominating set formation in ad hoc wireless networks, IEEE

Transactions on Parallel and Distributed Systems 15 (10) (2004)

908–920.

[23] S. Butenko, X. Cheng, C. Oliveira, P.M. Pardalos, A new heuristic for

the minimum connected dominating set problem on ad hoc

wireless networks, in: Recent Developments in Cooperative

Control and Optimization, Kluwer Academic Publishers., 2004,

pp. 61–73.

[24] P. Wightman, M. Labrador, A3: A topology control algorithm for

wireless sensor networks, in: Proceedings of IEEE Global

Communications Conference (GLOBECOM), New Orleans, USA, 2008.

[25] K.S. Narendra, M.A.L. Thathachar, Learning automata: an

introduction, Prentice-Hall, New York, 1989.

[26] M.A.L. Thathachar, B.R. Harita, Learning automata with changing

number of actions, IEEE Transactions on Systems, Man, and

Cybernetics SMC-17 (1987) 1095–1100.

[27] C.E. Bonferroni, Teoria Statistica Delle Classi e Calcolo Delle

Probabilità, Pubblicazioni del R Istituto Superiore di Scienze

Economiche e Commerciali di Firenze, vol. 8, 1936, pp. 3–62.

[28] IEEE Computer Society LAN MAN Standards Committee, Wireless

LAN Medium Access Protocol (MAC) and Physical Layer (PHY)

speciﬁcation, IEEE Standard 802.11-1997, The Institute of Electrical

and Electronics Engineers, New York, 1997.

[29] W. Heinzelman, A.P. Chandrakasan, H. Balakrishnan, Energy-

efﬁcient communication protocols for wireless microsensor

networks, in: Proceedings of the 33rd Hawaii International

Conference on System Sciences, 2000.

[30] J. Akbari Torkestani, A new distributed job scheduling algorithm

for grid systems, Cybernetics and Systems 44 (1) (2013) 77–

93.

[31] J. Akbari Torkestani, An adaptive learning to rank algorithm:

Learning automata approach, Decision Support Systems 54 (1)

(2012) 574–583.

[32] J. Akbari Torkestani, A distributed resource discovery algorithm for

P2P grids, Journal of Network and Computer Applications 35 (6)

(2012) 2028–2036.

[33] J. Akbari Torkestani, Degree constrained minimum spanning tree

problem: a learning automata approach, The Journal of

Supercomputing (2013), in press.

[34] J. Akbari Torkestani, An adaptive heuristic to the bounded diameter

minimum spanning tree problem, Soft Computing 16 (11) (2012)

1977–1988.

[35] J. Akbari Torkestani, An adaptive focused web crawling algorithm

based on learning automata, Applied Intelligence 37 (4) (2012) 586–

601.

[36] J. Akbari Torkestani, LAAP: A learning automata-based adaptive

polling scheme for clustered wireless Ad-Hoc Networks, Wireless

Personal Communication 69 (2) (2013) 841–855.

[37] J. Akbari Torkestani, Mobility prediction in mobile wireless

networks, Journal of Network and Computer Applications 35 (5)

(2012) 1633–1645.

[38] J. Akbari Torkestani, M.R. Meybodi, Finding minimum weight

connected dominating set in stochastic graph based on learning

automata, Information Sciences 200 (2012) 57–77.

[39] J. Akbari Torkestani, A learning automata-based solution to the

bounded diameter minimum spanning tree problem, Journal of the

Chinese Institute of Engineers (2012), in press.

[40] J. Akbari Torkestani, An adaptive learning automata-based ranking

function discovery algorithm, Journal of Intelligent Information

Systems 39 (2) (2012) 441–459.


[41] J. Akbari Torkestani, A new approach to the job scheduling

problem in computational grids, Cluster Computing 15 (3)

(2012) 201–210.

[42] J. Akbari Torkestani, Mobility-based backbone formation in wireless

mobile Ad-hoc Networks, Wireless Personal Communication (2012),

in press.

[43] J. Akbari Torkestani, Energy-efﬁcient backbone formation in wireless

sensor networks, Computer and Electrical Engineering (2013), in

press.

[44] J. Akbari Torkestani, An energy-efﬁcient topology construction

algorithm for wireless sensor networks, Computer Networks

(2013), in press.

[45] J. Akbari Torkestani, An adaptive backbone formation algorithm for

wireless sensor networks, Computer Communications 35 (2012)

1333–1344.

Javad Akbari Torkestani received the B.S. and

M.S. degrees in Computer Engineering in Iran,

in 2001 and 2004, respectively. He also

received the Ph.D. degree in Computer Engi-

neering from Science and Research University,

Iran, in 2009. Currently, he is an assistant

professor in Computer Engineering Depart-

ment at Arak Azad University, Arak, Iran. Prior

to the current position, he joined the faculty

of the Computer Engineering Department at

Arak Azad University as a lecturer. His

research interests include wireless networks,

multi-hop networks, fault tolerant systems, grid computing, learning

systems, parallel algorithms, and soft computing.
