Continental Journal of Information Technology, Vol. 4

Continental J. Information Technology 4: 1 - 8, 2010 ISSN: 2141 - 4033
© Wilolud Journals, 2010 http://www.wiloludjournal.com

THE USE OF WEB DIRECTORIES AS INGREDIENTS FOR ARCHITECTURAL RESEARCH IN NIGERIA

1Salami Rafiu Olugbenga and 2Olaniyan-Abdusalam Monsurat Kehinde
1Department of Architectural Technology, Faculty of Environmental Studies, Rufus Giwa Polytechnic, Owo, Nigeria
2Department of Architecture, School of Environmental Technology, Federal University of Technology, Akure, Nigeria

ABSTRACT
It is a well known fact that progress in any discipline and profession is ultimately related to the quality and quantity of its engagement with the production of knowledge through research. Engaging in research can enhance design and ultimately broaden the scope of architectural practice in Nigeria. The impact of information and communication technology in this direction is unprecedented. On the web today there are millions of relevant information resources that can serve as ingredients for good architectural research. This paper explores one of the major resources for locating vital materials on the internet: web directories. Their search techniques, features and benefits are analyzed. Web directories offer numerous resources for e-books, e-journals and digital collections, coupled with virtual libraries and other valuable information. The paper advocates the creation of versatile web portals that are heavily loaded with local content, through collaboration with architectural educators and other stakeholders, so as to accelerate meaningful and effective research in architecture.

KEYWORDS: search engines, web directory, web portal, architectural research

INTRODUCTION
Globalization is the new bride of every academic discourse. Every discipline sees it as a common asset, although it is understood through the specialized spectacles of each field's academic concerns (Akpan, 2008). Architectural research in Nigeria is not left out; it has been affected greatly by the new Information and Communication Technology (ICT), and ICT is now an integral part of research. One of the most efficient ways of conducting research is to use the internet. An internet user has access to a wide variety of services such as electronic mail, file transfer, vast information resources, interest group membership, interactive collaboration, multimedia displays, real-time broadcasting, breaking news, shopping opportunities and much more. This paper therefore explores the efficacy of web directories, their search strategies, traits and benefits. It also provides several resources for e-books, e-journals, digital collections and online sources for printed materials in architecture.

Architecture
Architecture is a discipline and a profession with its own unique body of knowledge, generally defined as the art and science of the design and erection of buildings. It manipulates space in order to provide physical objects that accommodate human activities, incorporating a wide variety of activities and human users (Amole, 2004).

Research
Research is essentially a way of thinking; it is a manner of regarding accumulated facts so that a collection of data becomes articulate to the mind of the researcher in terms of what those data mean and what the facts say. Osuala (1987) admits that it is the process of arriving at dependable solutions to problems through the planned and systematic collection, analysis and interpretation of knowledge, for promoting progress and for enabling man to relate more effectively to his environment, to accomplish his purposes, and to resolve his conflicts. Quintarina (2007) agrees that research is simply the manner in which men solve knotty problems in their attempt to push back the frontiers of human ignorance.

Architectural research
Architectural research is therefore research that focuses on the uniqueness of architecture as a discipline and profession, together with its body of knowledge, ideologies and practice. The aim of architectural research is to advance knowledge within the discipline by deploying universally valid research approaches and methods to address the concerns of architecture.





Conducting Research on the Internet
There are four basic types of search tools on the internet:
(i) Search engines
(ii) Web indices
(iii) Web directories
(iv) The “deep” or “invisible” web

As illustrated in Fig. 1, they are useful for different types of queries, and it is very important for searchers to know their differences.


Search Engine




Web Indices SEARCH Web Directories
TOOLS




Deep Web

Fig 1: Four basic types of search tools

(i) Search engines: These are primarily first-generation services that have been around for quite a while. A search engine is a software tool that crawls the web searching for information (Ogunsote et al, 2003). Search engines are searchable databases of internet files collected by a computer program called a wanderer, robot, spider or crawler (a minimal sketch of this crawl-and-index process appears after this list). Google is a famous example of a search engine.

(ii) Web indices: An index contains a copy of each web page gathered by the spider. It is a listing of the contents of the web, usually in a particular category (Ogunsote et al, 2003).

(iii) Web directories: A web, link or subject directory is defined as a categorized collection of information organized into a tree-like structure in which categories define each group's association (http://www.small-business-software.net/web-directory.htm). Directories, unlike search engines, are compiled manually by real, live humans. These humans, rather than software programs, select the sites to be added to the directory.

(iv) Deep or invisible web: The deep web consists of information stored in searchable databases mounted on the web. It is only accessible by user query. These databases usually cover a targeted topic or aspect of a topic, though entire web sites may be contained within a database. Search engine spiders cannot or will not index this information. Information that is dynamically changing in content will appear on the invisible web; examples include news, job postings and available airline flights. Completeplanet.com is a good example of the deep web.
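To make the contrast between items (i) and (iii) concrete, the Python sketch below imitates, at toy scale, how a spider automatically crawls pages and builds a searchable index, which is the part a human-compiled directory replaces with editorial judgement. Everything here (the toy page set, the function names) is an illustrative assumption, not the behaviour of any real search engine.

```python
from collections import deque

# A toy "web": page URL -> (page text, outgoing links). Purely illustrative data.
TOY_WEB = {
    "http://a.example": ("architecture research in nigeria", ["http://b.example"]),
    "http://b.example": ("web directories for research", ["http://c.example"]),
    "http://c.example": ("online libraries and e-journals", []),
}

def crawl_and_index(seed):
    """Breadth-first 'spider' that visits pages and builds an inverted index:
    word -> set of pages containing that word."""
    index, visited, queue = {}, set(), deque([seed])
    while queue:
        url = queue.popleft()
        if url in visited or url not in TOY_WEB:
            continue
        visited.add(url)
        text, links = TOY_WEB[url]
        for word in text.split():
            index.setdefault(word, set()).add(url)
        queue.extend(links)
    return index

def search(index, query):
    """Return pages containing every word of the query (AND semantics)."""
    results = None
    for word in query.lower().split():
        pages = index.get(word, set())
        results = pages if results is None else results & pages
    return results or set()

if __name__ == "__main__":
    idx = crawl_and_index("http://a.example")
    print(search(idx, "research"))   # pages found automatically by the spider
```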

How web directories work
Web directories are usually much smaller than search engine databases, since the sites are examined by human eyes instead of spiders or software. The searcher works through sites organized in a series of category menus. There are two ways for a site to be included in a web directory's listings: either the site owner submits the site to the web directory, or the directory's editor(s) eventually come across that site.
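A web directory can be pictured as a category tree whose leaves are hand-picked sites. The short Python sketch below is an assumed, simplified model of the two inclusion routes just described (owner submission and editor discovery); the class name, category names and URLs are hypothetical and only illustrate the structure.

```python
class DirectoryCategory:
    """One node in a web directory's tree of categories."""
    def __init__(self, name):
        self.name = name
        self.subcategories = {}   # subcategory name -> DirectoryCategory
        self.sites = []           # (url, annotation) pairs approved by editors

    def add_subcategory(self, name):
        return self.subcategories.setdefault(name, DirectoryCategory(name))

    def list_site(self, url, annotation):
        """An editor approves a site, whether it was submitted or discovered."""
        self.sites.append((url, annotation))

# Build a tiny directory: Top > Arts > Architecture > Education
root = DirectoryCategory("Top")
architecture = root.add_subcategory("Arts").add_subcategory("Architecture")
education = architecture.add_subcategory("Education")

# Route 1: a site owner submits the site, an editor reviews and lists it.
education.list_site("http://archnet.org", "Online community for architects")
# Route 2: an editor comes across a site and lists it directly.
architecture.list_site("http://www.botw.org", "Best of the Web architecture listings")
```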

Table 1: Selected general web directories
Web directory | Address | Comment
Open Directory Project (a.k.a. dmoz or ODP) | www.dmoz.org | The largest directory, constructed and maintained by a “vast, global community of volunteer editors”.
LookSmart | www.looksmart.com | A web directory with a recent foray into vertical search channels.
Yahoo Directory | www.yahoo.com | One of the best directories on the web, along with its own search engine listings and many other search services.
Ansearch | www.ansearch.com | Web search and directories focusing on the US, UK, Australia and New Zealand.
Best of the Web Directory | www.botw.org | Lists content-rich, well designed websites categorized both by topic and by region.
Google Directory | directory.google.com | Uses hierarchical subject organization to locate sites on a topic; narrows down web searches with too many results. See Fig. 3.
(Source: Internet search)

Table 2: Selected academic and professional directories
Web directory | Address | Comment
Academic Info | www.academicinfo.net | Gateway to college and research level internet resources, maintained by former librarian Mike Madin and a volunteer group of subject specialists.
INFOMINE | http://infomine.ucr.edu | Large collection of scholarly internet resources collectively maintained by several libraries, e.g. the University of California. See Fig. 2.
The Internet Public Library | www.ipl.org/div/books | Large selective collection from the University of Michigan.
Intute | www.intute.ac.uk | UK-based collection of resources for education and research.
The WWW Virtual Library | www.vlib.org | Guides to many disciplines, sponsored by the W3.
Librarians' Internet Index | www.lii.org | Carefully chosen, organized and annotated directory maintained by a large group of librarians in California.
(Source: Wikipedia (2008) and internet search)





How to search a web directory
(i) The searcher types a query into the web directory, or browses the web directory's indexed categories.
(ii) The web directory matches the searcher's query against relevant data from its index.
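Continuing the hypothetical DirectoryCategory sketch above, the function below illustrates the two steps just listed: walking the indexed categories and matching the query against category names and site annotations. It is an assumed toy model, not the behaviour of any particular directory service.

```python
def search_directory(category, query, path=""):
    """Depth-first walk of a DirectoryCategory tree, returning
    (category path, url, annotation) for every listed site whose
    category name or annotation contains the query string."""
    here = f"{path}/{category.name}" if path else category.name
    matches = []
    q = query.lower()
    for url, note in category.sites:
        if q in category.name.lower() or q in note.lower():
            matches.append((here, url, note))
    for sub in category.subcategories.values():
        matches.extend(search_directory(sub, query, here))
    return matches

# Example query against the toy directory built earlier:
for hit in search_directory(root, "architects"):
    print(hit)   # e.g. ('Top/Arts/Architecture/Education', 'http://archnet.org', ...)
```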

Types of web directories
Web directories come in all shapes and sizes; some are generic, while others are highly specialized, as shown in Tables 1, 2, 3 and 4. There are basically two types of directories:

(i) Academic and professional directories: created and maintained by subject experts or librarians to support the needs of researchers, these tend to be associated with libraries and institutions. The collections are created in order to enhance the research process and help users find high-quality sites of interest. A careful selection process is applied, and links to the selected resources are usually annotated. As a rule, these sites do not generate income or carry advertisements. Example: The Argus Clearinghouse consists of highly selective subject guides that are often compiled by experts; the guides are themselves evaluated by the clearinghouse staff (http://www.stfrancis.edu/cid/docs/saerch).

(ii) Commercial portals and directories: These cater for the general public and usually compete for traffic. They are created to generate income and to serve the public. These services contain directories that link to a wide range of topics and often emphasize entertainment, commerce, hobbies, sports, travel and interests not necessarily covered by academic directories. Yahoo is the best-known example of a commercial portal. Researchers should use the appropriate type of directory to meet their needs.




Fig. 2: Homepage of the INFOMINE directory
(Source: http://infomine.ucr.edu)










Fig. 3: Homepage of the Google Directory
(Source: directory.google.com)

Table 3: Selected specialist web directories
Web directory | Website | Comment
Business.com | www.business.com | Business directory which charges a fee for review, operating as a pay-per-click search engine.
V Funk | www.v-funk.com | Specializes in listing and categorizing global dance music and urban lifestyle listings.
(Source: Wikipedia (2008) and internet search)

Table 4: Selected commercial and portal directories
Web directory | Website | Comment
About | www.about.com | Large collection of topical collections gathered by company-certified subject specialists.
JoeAnt | www.joeant.com | Guide compiled by volunteers; listings include information about each site, including multimedia, features, chat, e-commerce, access limitations, etc.
Jumpcity | www.jumpcity.com | A collection that offers a signed review of each item and a link to any Usenet newsgroup related to the topic.
WebBrain | www.webbrain.com | Java-based visual search engine of the Open Directory Project index; the interface allows users to click through an animated display of categories to locate web sites.
(Source: internet search)






The search techniques
Many researchers do not make enough use of directories but instead go straight to search engines. For targeted, multi-concept, and sometimes general queries, a search engine is advisable; but for research-oriented queries involving an exploration of a topic, a web or subject directory is highly recommended.

Online books
Dot-coms, dot-orgs and dot-edus have become interested in offering free search of the world's literature as found in books and scholarly materials. Once results are found, searchers can access the material based on its copyright status. Materials out of copyright are generally fully available for viewing and printing, while only snippets of text or abstracts are available for copyrighted works. Good examples of sources of digital books that can be accessed freely on the web are archnet.org and architectureasia.com; these are online communities for architects, planners, urban designers and interior designers with a special focus on the Islamic world, and publications can be downloaded with ease from the sites. Two notable sites for book search are Amazon and Google Book Search. Scholarly materials in the form of electronic journals and other similar works are also becoming freely searchable, as shown in Table 5; these services include Google Scholar and Windows Live Search Academic.



Table 5: Selected electronic journals
Periodical | Web address | Comment
Architecture | www.lib.berkeley.edu/ENVI/architecture.html | The environmental design library of the University of California, Berkeley.
Electronic journals and magazines | www.usg.edu/galileo/internet/electronic/elecjour.html | An index of electronic journals and texts, including architecture journals.
Architecture journals | www.lib.strath.ac.uk/engweb/archej.html | A University of Strathclyde list of electronic journals of relevance to architecture, available via the library catalogue.
Architect designed houses | www.archmedia.com | Architect Designed Houses showcases domestic architecture by selected Australian architects, featuring houses and renovations to suit a variety of styles and budgets.
ArchNet-IJAR | www.archnet.org | The new online journal of architecture and urbanism: the International Journal of Architectural Research (IJAR).
RIBA Journal | www.ribajournal.com/index.asp | The popular journal of the Royal Institute of British Architects (RIBA).
(Source: internet search)

Online libraries
These contain online collections of unique materials to support the needs of advanced and highly specialized scholarship (Encarta, 2008). There are several public, federal and university libraries on the web that feature online catalogues and digital collections, as shown in Tables 6 and 7.










Table 6: Selected university libraries
Library | Web address
Harvard University Library | www.hacl.harvard.edu
MIT Libraries | www.libraries.mit.edu
Princeton University Library | www.infoshare1.princeton.edu
Yale University Library | www.library.yale.edu
Louisiana State University | www.lib.lsu.edu/weblio.html
(Source: Ogunsote et al 2003 and internet search)

Table 7: Selected public and federal libraries
Library | Web address
Chicago Public Library | www.chipublib.org
Library of Congress | www.leweb.loc.gov
New York Public Library | www.nypl.org
Philadelphia Free Library | www.library.phila.gov
(Source: Ogunsote et al 2003 and internet search)

MIT OpenCourseWare (OCW)
MIT OCW allows people all over the world to tap into the reserves of knowledge of a major institution (Festa, 2003). It is a free and open educational resource (OER) for educators, students and self-learners around the world, publishing materials from more than 1,550 MIT courses, and it does not require any registration (Massachusetts Institute of Technology, 2008).

Web portals
A web portal is a site that provides a single function via a web page or site; it often functions as a point of access to information on the World Wide Web (Wikipedia, 2008). It is a website that acts as a gateway to a type of product, an activity, a profession, a business process, an industry, or any one of a multitude of subjects. Portals are places where people go to get information, and even to chat, send email, shop online and form online communities (Ogunsote et al, 2007). ArchNetNG is a good example of a web portal; though still very much in its infancy, it is highly commendable, and encouragement should be given to the architects of this premier web portal for Nigerian architecture. It is an online community for Nigerian architects, planners, urban designers and landscape architects, and for everything that is Nigerian architecture.

CONCLUSION AND RECOMMENDATIONS
There is no iota of doubt that architecture is a unique discipline, and architectural researchers in Nigeria need to experiment with the various search tools and techniques so as to equip themselves with reliable information related to architecture globally. This can be accelerated by creating several web portals in the Nigerian context through the collaborative effort of the NIA, ARCON and other stakeholders.






REFERENCES
Akpan (2008). Globalization and the challenges of communication technology to public relations in Nigeria. Uyo Journal of Humanities, vol. 9, pp. 16-44.

Amole, B. (2004). Architectural Research: An Introduction. A Monograph of the Association of Architectural Educators in Nigeria (AARCHES), No. 1, pp. 7-20.

Encarta (2008). Microsoft Student with Encarta Premium 2008 DVD.

Festa, P. (2003). MIT for free, virtually. CNET News.com. Available from: http//news.com./21000102505083840.html [Accessed March 2nd 2008].

Massachusetts Institute of Technology (2008). MIT OpenCourseWare. Available from: http://ocw.mit.edu/index.html [Accessed March 2nd 2008].

Ogunsote, O.O. et al (2003). The Use of Search Engines, Web Directories and Indices on the World Wide Web for Architectural Research in Nigeria. Presented at the 2003 Annual General Meeting and Conference of the Association of Architectural Educators in Nigeria (AARCHES).

Ogunsote, O.O. et al (2007). ArchNET: Draft Concept and Specifications for a Premier Web Portal for Nigerian Architecture. Paper presented at the 2007 Annual Conference of the Association of Architectural Educators in Nigeria (AARCHES).

Osuala, E.C. (1987). Introduction to Research Methodology, second edition. Africana-FEP Publishers Limited, Onitsha, Nigeria.

Quintarina, U. (2007). Research in Urban Design. Web page retrieved from http: /online.trisakti.ac.ic /news / jurlemlit /9quinoo.htm [Accessed March 2008].

Wikipedia (2008). Web Portal. Available from http://en.wikipedia.org/wiki/Web_portal [Accessed March 2008].

Received for Publication: 18/09/2009
Accepted for Publication: 02/05/2010

Corresponding Author:
Salami Rafiu Olugbenga
Department of Architectural Technology, Faculty of Environmental Studies, Rufus Giwa Polytechnic, Owo, Nigeria
















Continental J. Information Technology 4: 9 - 14, 2010 ISSN: 2141 - 4033
© Wilolud Journals, 2010 http://www.wiloludjournal.com

CONSTRUCTION OF A MULTIMEDIA LEARNING ENVIRONMENT: SECRETARIAL STUDENTS' EXPERIENCE

Isa, Abdullahi
Department of Secretarial Studies, Hassan Usman Katsina Polytechnic, Katsina State

ABSTRACT
In recent years, the infusion of multimedia into teaching and learning has altered considerably the instructional strategy in our educational institutions and changed the way teachers teach and students learn. The traditional teacher-centred method of teaching, used for decades in our educational system, has been modified and enhanced through multimedia. This paper focuses on creating a multimedia learning system in the computer laboratory of the Secretarial Studies department of Hassan Usman Katsina Polytechnic. Using a standardized method of random sampling, 35 ND students were selected for the construction of the multimedia learning environment. Students learned a multimedia design process and the use of an authoring tool, Macromedia Director, and then applied the knowledge they had gained to build a multimedia project of their own. It was discovered that the multimedia learning approach empowers students to construct their own knowledge and enables them to think critically, learn to work in teams and solve problems collectively.

Keywords: Multimedia, collaborative activity, computer oriented software, student-centred,
Macromedia, constructivist learning.

INTRODUCTION
With the rapid progress achieved in the last few decades in personal computer (PC) and multimedia technologies, it is now feasible and affordable to integrate multimedia technology into teaching and learning in the classroom. Multimedia refers to any computer-mediated software or interactive application that integrates text, colour, graphical images, animation, audio and full-motion video in a single application. Multimedia learning systems consist of animation and narration, which offer a potential venue for improving student understanding (Mayer, 2000). Multimedia instruction (or a multimedia learning environment) involves presenting words and pictures that are intended to promote learning. In short, multimedia instruction refers to designing multimedia presentations in ways that help students build mental representations.

The traditional chalk-and-talk method of teaching, which has been used for decades in our educational system, has been modified and enhanced by these technological advances. The instructional media in this mode are essentially textual or sometimes graphical. This traditional mode of learning is essentially modelled on the behavioural learning perspective. Basically, the teacher controls the instructional process and is regarded as the source of expert knowledge, which is communicated to the students through lectures in a classroom environment. The teacher decides how much information is to be delivered to the learners, while the students remain passive and obedient recipients of knowledge and information and play little part in the learning process.

With the use of the PC and multimedia, the scenario immediately changes. Multiple media can now be used in presenting the instructional materials, delivered in a multi-modal environment. Furthermore, educators can incorporate features such as interactivity and navigational links into the content with the assistance of authoring tools such as Director and Authorware, enabling learners to interact with the content in the way they like best. The presentation is non-linear and is able to foster two-way communication or interaction between the user and the computer. Learning can take place at the learner's own pace and time. This mode of learning is student-centred or self-directed learning, which caters to individual needs in learning, unlike the mass learning method practiced in the teacher-centric or directed instruction mode. It is the aim of this paper to present a step-by-step approach to constructing a multimedia learning environment in such a way that students can be independent in the learning process.







Multimedia: a constructivist approach to learning
The constructivist approach to learning describes a learning process whereby students work individually or in small groups to explore, investigate and solve authentic problems and become actively engaged in seeking knowledge and information rather than being passive recipients, as in the traditional teacher-centric learning which has its foundation embedded in the behavioural learning perspective. In this traditional learning mode, the teacher basically controls the instructional process, the content is delivered to the entire class, the teacher tends to emphasize factual knowledge, and the focus of learning is on the content, i.e. how much material has been delivered and how much the students have learned. Thus, the learning mode tends to be passive and the learners play little part in their learning process (Mayer, 1988).

In the student-centred learning mode, students play an active part in their learning process and become autonomous learners who are actively engaged in constructing new meaning within the context of their current knowledge, experiences and social environments. Learners become successful in constructing knowledge through solving problems that are realistic, usually working in collaboration with others. Although developed in the second half of the 20th century, the constructivist learning approach has its foundation in cognitive learning psychology (Bruner, 1989).

Generally, constructivist learning places emphasis on the learner and propounds that learning is affected by the learner's context, beliefs and attitudes. Learners are encouraged to seek information and knowledge on their own, determine how to reach the desired learning outcomes themselves and build upon their prior knowledge and experiences rather than relying on teachers to supply them with information. In a constructivist learning environment, students learn by fitting new information together with what they already know and actively construct their own understanding. Learning takes place in a meaningful, authentic context and is a social, collaborative activity in which peers play an important role in encouraging learning. In doing so, students gain a deeper understanding of the event and thereby construct their own knowledge and solutions to the problems. In this learning mode, the focus is on the learning process rather than on the content, i.e. learning 'how to learn' rather than 'how much is learned'. This learning environment encourages students to develop critical thinking, problem-solving and team skills, experiential learning and inter-disciplinary knowledge, with technology being integral to their learning. It also represents a move away from the traditional modes of education to one where the learners are active participants in the learning process.

The Methodology
The adoption of multimedia technologies in the classroom teaching and learning environment has made it possible for learners to become involved in their work and to create multimedia applications as part of their project requirements. This enables them to become active participants in their own learning process, making use of the knowledge presented to them by the lecturer and representing it in a more meaningful way using different media elements, instead of just being passive learners of the educational content. As such, multimedia application design offers new insights into the learning process of the designer and forces him or her to represent information and knowledge in a new and innovative way.

To create a student-centred learning environment in the classroom, students (N=35) of OND Office Technology and Management were grouped into seven teams. Each group had to decide on its members, its topic and its group leader. Each group assigned various tasks to its members and managed its own project, which was the design of an automated office. They had to use a multimedia authoring tool, Macromedia Director, as the instructional tool to create the project and deliver it on a CD-ROM. As a group, students had to decide on the conceptual model of their presentation, the design of the multimedia interface, the navigation paths and the interactive features used to best convey their topic of interest. In the process, they had to employ their experience and previously acquired knowledge in different disciplines to break down the application design into its component parts, synthesize the media elements that represent the information, create the digital interactive application, and work as a team to accomplish the project's overall objectives as well as their own learning outcomes. The teacher and students met twice to review the overall progress of the group projects and to consult on any issues or concerns that might arise. Students were given the entire semester (15 weeks) to develop their projects. Since these students had no prior knowledge of multimedia authoring and authoring tools, they were given lectures and tutorials in order to provide them with basic skills in multimedia application development. They did have prior experience with design and other multimedia software such as Adobe Photoshop, Macromedia Flash and SoundForge, which they could utilize together with Director to develop their projects.

This student-centred learning environment is constructivist in approach in that multiple perspectives on the problems can be developed and students can actively participate in their own learning process. Thus, by designing a multimedia application that is multi-sensory and interactive, students are challenged to develop skills in problem-solving; to exercise analytical, critical and creative thinking in their work; to learn more about their chosen subject material; and to develop their abilities to analyze and draw conclusions from it. The role of the teacher in this class was that of a facilitator and consultant to the students, supporting them in their process of learning and constructing their projects.

The Construction of Interactive Multimedia
The students' interactive multimedia development process began with an ideation process that was implemented using technology and finally resulted in a tangible final product: the interactive media application, which was turned in to the lecturer for evaluation. In terms of documenting their development, students undertook a multimedia development process (MDP), a five-phase procedure that led them from the drawing board to the computer and finally to the CD-ROM. The five phases of the MDP were as follows:

Phase One: Planning and Organizing
In this initial stage, students had to first decide on their team members. Here, the groups embarked on a planning and organizing process, which entailed scheduling meetings for discussion, working with their timetables for research and development time, team organization, and discussion of the team's topic. In this phase, brainstorming was prevalent, and the ideation process gave way to the conceptualization of a pool of topics, which in due course was narrowed down to one. The chosen topic was then presented to the lecturer for discussion. Storyboarding was also carried out to better visualize the topic and its flow of action, and a division of tasks was made.

Phase Two: Research and Information Gathering
In this phase, the groups set out to collect as much information about their chosen topic as possible. Much of this information gathering and research was carried out in the students' own time and included interviews, collecting brochures, and meeting with key persons. Materials gathered during this process were mostly analogue data. The groups had to learn about being professional in their approach, as many were dealing with the corporate sector, and to improve their presentation and communication skills in order to reach the relevant people and be successful in their endeavours.

Phase Three: Digital Media Content Acquisition
Digital media content acquisition involved the groups organizing the digital media content to be used in their final application and planning how to acquire it. This included deciding whether to create the digital media content themselves or to use third-party content. For example, many groups wanted to use digital video footage in their applications and therefore had to use a digital video camera and edit the media appropriately. Other groups preferred to use footage from the polytechnic archives. Whatever their sources, the groups had to use media-editing software such as Adobe Photoshop (for images), Sound Forge (for sound), and Adobe Premiere (for video) to achieve their aims.

Phase Four: Multimedia Authoring
This phase involved three significant procedures: the integration of the digital content, the incorporation of interactivity and navigation, and packaging onto a CD-ROM. Here, students used the multimedia authoring tool, Macromedia Director, to integrate all the media elements that they had acquired during the previous phases. After doing that, they decided on the navigational structure of the application, based on their storyboards. Interactive features were also incorporated into the application to create user involvement; these features included hotspots, menus, buttons, hypertext and hyperlinks. Finally, when the application was completed, it was packaged into a standalone application.






Phase Five: Reflection
Here students were given a chance to reflect on their work and their interactions with their team members and group
leaders. This was carried out via students’ feedback and interviews.

Assessment and Results
In terms of assessment, several criteria were applied to the projects. In particular, the students were assessed on:
1. Their creativity and originality in developing their applications.
2. The depth of content displayed in the applications and their documentation.
3. The successful and effective transformation of their concept from ideation to the final executable product.
4. The successful and effective implementation of the multimedia design process, with the appropriate use of the multimedia authoring tool, Director, and other helper applications.
5. The level of difficulty achieved in using the multimedia authoring tool in terms of navigation and interactivity.
6. Proper representation of the content via media elements.
7. Teamwork and group management.
8. Overall presentation of the digital application and documentation.

Overall, the class did well in their projects and class results: 31% of the class achieved 'A' grades, 57% achieved 'B' grades and 13% achieved 'C' grades. The result is consistent with the overall class performance, where those who scored high overall results in the class also scored high marks for their projects.

Evaluation of Student Learning
Students in this student-centred learning environment were evaluated using a 5-point Likert scale (N=35), with 1 for Strongly Disagree (SDA), 2 for Disagree (D), 3 for Undecided (UD), 4 for Agree (A), and 5 for Strongly Agree (SA). The items on the multimedia environment made up several constructs measuring student-centred learning traits such as problem-solving skills, collaborative effort and teamwork. The results showed that all items measured yielded means of 3.64 and above (see Table 1), indicating that the students were enthusiastic about the project and were very positive in their attitudes towards working on a multimedia project and working in teams.

The statistical tools used
Descriptive statistics were used to analyze the data in this study; the measure of central tendency used in the data analysis was the mean.

Mean: The average value of a dataset, i.e. the sum of all the data divided by the number of observations. The arithmetic mean is commonly called the "average"; when the word "mean" is used without a modifier, it usually refers to the arithmetic mean. The mean is a good measure of central tendency for symmetrical (e.g. normal) distributions but can be misleading in skewed distributions, since it is influenced by outliers. In general, the mean is larger than the median in positively skewed distributions and less than the median in negatively skewed distributions. It is on the basis of this property that the study adopted the mean in the data analysis.

Formula for the arithmetic mean: mean = ΣX / N, where X is the raw data and N (or n) is the number of scores.
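As a quick illustration of this formula, the Python sketch below computes the arithmetic mean of a set of 5-point Likert responses and, in passing, the proportion of respondents who agreed or strongly agreed. The sample responses are invented for illustration only; they are not the study's data.

```python
def arithmetic_mean(scores):
    """Mean = sum of all scores (ΣX) divided by the number of scores (N)."""
    return sum(scores) / len(scores)

def percent_agreeing(scores, threshold=4):
    """Share of responses at 'Agree' (4) or 'Strongly Agree' (5) on a 5-point scale."""
    return 100 * sum(1 for s in scores if s >= threshold) / len(scores)

# Hypothetical Likert responses (1 = Strongly Disagree ... 5 = Strongly Agree).
responses = [5, 4, 4, 5, 3, 4, 5, 4, 2, 4]

print(f"Mean = {arithmetic_mean(responses):.2f}")          # 4.00 for this sample
print(f"% agreeing = {percent_agreeing(responses):.0f}%")  # 80% for this sample
```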






RESULTS AND DISCUSSION

Table 1: Means for items on the multimedia environment and students' experience.
Item | Mean
Found the project challenging | 4.17
Better able to represent concepts using digital multimedia | 4.15
Project allowed me to be creative in my thinking | 4.15
Able to have creative input | 4.02
Understood subject better after project | 3.98
Project allowed me to think critically about the topic | 3.98
Felt very motivated doing this project | 3.98
The group was able to achieve its goals | 3.83
Able to learn more working with team members | 3.83

The results showed that students responded positively to the multimedia projects. The projects were challenging to students (p=91%, mean=4.17). Item 2 showed that the project enabled them to think creatively (p=91%, mean=4.15). Item 3 indicated that they were able to represent their concepts, knowledge and information effectively using multimedia elements (p=91%, mean=4.15). In item 4, students indicated that their understanding of the subject matter became better while doing the project on their own (p=89%, mean=3.98). This is in line with the work of Wilson (2001), who reported that students in a multimedia learning environment achieve 78% better understanding of the subject matter under study.

In items 5 and 6, it was observed that students were able to have creative input into the projects (p=83%, mean=4.02) and exercised critical thinking on their topics (p=83%, mean=3.98). Students were motivated by the project (p=76%, mean=3.98), achieved their group goals (p=83%, mean=3.83) and enjoyed teamwork with their peers (p=83%, mean=3.83). This agrees with the findings of Petre (2001), whose study of students' learning behaviour in a multimedia learning environment found that students achieve their best and attain high scores when allowed to build their own learning environment through a multimedia approach.

It was observed that the items rated highest by the students concerned the use of multimedia in the projects, the ability to be creative and the challenge that the project posed to them. This indicates that these students liked to conceptualize and express their ideas with a combination of media elements and to bring to the project innovative ideas derived from their group discussions and brainstorming sessions.

From the above observations, it can also be concluded that this project empowered students to develop and exercise their creative and critical thinking skills through their organizational and research activities and in the conversion of their initial ideas and concepts into concrete projects. They learned about planning, development and project management, and how to select the appropriate hardware and software applications for their development: skills that are important in a real-world setting. In addition, through working with an authoring tool and using other applications to build their projects, students were able to acquire technical skills in multimedia technology and incorporate interactive features into their presentations. Furthermore, in many of the groups, students had to develop their communication and presentation skills, since gathering information on their chosen topics involved interviewing people and visiting actual sites. They also had to learn to select the appropriate information to display in the electronic applications.

In the multimedia environment, teamwork was rated favourably (mean=3.83), indicating that students found teamwork and cooperation essential for completing their project and for taking advantage of the skills and expertise of each member of the team. In the interviews conducted during the "Reflection" phase of the MDP, some of the feedback expressed by the students included:
1. "We got to know each other better since we spent a lot of time together."





2. "We learnt more about our topic. It was fun to get to know everyone on the team, and we had fun shooting video; we had never done it before."
3. "We learnt more about multimedia, developing a CD-ROM, software, navigation and interactivity."

The students were able to develop interpersonal skills and take part in brainstorming activities while making decisions concerning their project. Many cited the ability to work through their problems via group discussion as integral to the successful completion of their projects and the achievement of their group goals (Trevitt, 1995). This aspect of the project is important, as teamwork is crucial to the success of a knowledge-based IT curriculum in which collaboration and knowledge-sharing among students in a team constitute the very essence of electronic learning.

CONCLUSION AND RECOMMENDATIONS
The results of this project showed clearly that multimedia technology greatly influences a student's learning process and widens the scope of learning skills and knowledge. This multimedia mode of learning provides an alternative to traditional teacher-centric learning and enables students to enjoy a richer e-learning environment. It empowers students to become active learners, to display their ideas and information in multimedia format, and to use higher-level thinking skills such as analysis, synthesis, evaluation and reflection while solving authentic problems. This learning mode also allows the teacher the flexibility to present the curriculum in an innovative manner.

The teacher should act as a facilitator, a consultant or a guide on the side, helping students to access, organize and obtain information in order to find solutions to problems, rather than being the one supplying and prescribing information and knowledge to the learners, as in the traditional behaviourist learning mode. In this learning mode, student learning, and in particular the learning process, should become the main focus, not the content. The teacher should employ the technology that can provide the most desirable multimedia learning environment; a student-centred learning environment built around multimedia technology can contribute substantially toward enhancing student learning and the learning process.

REFERENCES
Bruner, D. (1989). The Effectiveness and Cost of Interactive Videodisc Instruction. Machine-Mediated Learning, 3, 361-385.

Mayer, R. (2002). Aids to Computer-based Multimedia Learning. Learning and Instruction, 12, 107-119.

Mayer, L.J. (1988). Multimedia Information and Learning. Journal of Educational Multimedia and Hypermedia, 5, 129-150.

Petre, D. (2001). The Effect of a Multimedia Learning Environment on Students' Performance. The Journal of Instructional Technology, 2(11), 12-15.

Quintana, Y. (1996). Evaluating the Value and Effectiveness of Internet-Based Learning. INET96 Proceedings, Montreal, Canada.

Trevitt, A. (1995). Interactive Multimedia in University Teaching and Learning: Some Pointers to Help Promote Discussion of Design Criteria. Presented at the session on package development at the Computers in University Biological Education Virtual Conference.

Wilson, K. (1992). Interactive Multimedia Learning Environments (pp. 186-196). Berlin: Springer-Verlag.

Received for Publication: 12/04/2010
Accepted for Publication: 02/05/2010





Continental J. Information Technology 4:15 - 19, 2010 ISSN: 2141 - 4033
© Wilolud Journals, 2010 http://www.wiloludjournal.com

DETERMINANTS OF RESEARCH PUBLICATIONS AMONG UNIVERSITY LECTURERS IN ACADEMIC JOURNALS: A STUDY OF THE CROSS RIVER UNIVERSITY OF TECHNOLOGY, NIGERIA

1Ogar, N. E., 2James, E. U., 2Adinya, B. I., 3Agba, O. A., 4Essien, A. and 4Agbor, R. N.
1Dept. of Forestry and Wild Life Resources Mgt., 2Dept. of Agric. Econs/Ext., 3Dept. of Agronomy, 4Dept. of Animal Science, Cross River University of Technology, Obubra, Nigeria

ABSTRACT
Of all the expected duties of an academic, it has been reported that research takes precedence. This study examined the determinants of research publications among lecturers in the university. Data from 100 lecturers from 10 randomly selected departments were used. Findings revealed that a significant relationship exists between the frequency of research publications and the environment of research. There is thus a dire need to increase funding in the area of research and to make the environment of research less restrictive by removing the obstacles that limit research publications.

KEYWORDS: Research publications, Cross River University of Technology (CRUTECH)

INTRODUCTION
The basic idea behind research is an attempt by man to re-examine the world around him by asking questions and possibly proffering or attempting answers to phenomena that are intriguing to him and/or society. Enoh (2004) thus defines research as a scholarly or scientific investigation or experimentation aimed at the discovery of facts. He posited further that such investigation or experimentation must be based on the revision of hitherto accepted theories, premises or laws. This is why, as a process of critical inquiry, Popper (2002) stated that judgement is normally reserved until the empirical process takes precedence over speculation.

From the foregoing, it suffices to state that research is an indispensable tool for stakeholders in societal and development problems.

Eboh, cited in Anugwom et al (2002), points to the fact that research provides the basis for decision making by policy makers. He further posits that through research a country can devise alternative policies, measures and strategies and examine their implications. This is more so because, according to Nwagu (1991) and Ada et al (1971), the knowledge contained in research publications is supposed to represent the most recent ideas. In his contribution, Peil (2004) stated the importance of research by pointing out the dynamism and vibrancy inherent therein. He stated further that every research work has initially to study to seek out, and thereafter to seek out again and take another, more careful look in order to find new horizons or downturns within the phenomenon under study. The research attitude thus presumes that the first look, investigation or premise, and every later look, may be prone to errors, and so it becomes necessary to examine and re-examine in order to derive effective, valid, reliable and practical solutions to problems.

In the academic environment, of the three expected duties of an academic (teaching, research and service), research takes precedence. This is evident in the age-old slogan "publish or perish" and in the fact that the promotion of lecturers is based on the number of research publications they have. This awareness is the propelling force that has always impelled lecturers to embark on research publication from time to time, in either local or international journals.

Generally, knowledge, and particularly new knowledge as earlier stated, is derived from research (Oluikpe, 2002). Two types of research, basic and applied, have been identified. Basic research is knowledge derived from new findings or discoveries, while applied research is the application of these findings to solve practical problems. It is pertinent to point out here that the two forms of research are inter-related; Oluikpe (2002) posits that basic research oils the wheel of applied research. In other words, there is a synergy between the two.








In spite of all the above recognized importance of research, however, Alao (1994) noted quite frustratingly that most journals found on the shelves of many Nigerian universities, and even research-based institutions, often have a five-year lag before their next publication. This problem is further aggravated by the low regularity or frequency of research papers churned out by academics. It is therefore necessary to take a look at some of the determinants of research publications among university lecturers in academic journals.

CONCEPTUAL CLARIFICATIONS
In order to properly focus this study, some concepts need brief clarification: university lecturers, frequency of publication, local and international journals, and the university environment.

A university lecturer is used here to mean all those who are engaged in teaching, research and service within the university. Specifically, in the Cross River University of Technology, they are those from the rank of assistant lecturer upwards who hold a degree higher than the first degree in their various disciplines.

Frequency of publication denotes the regularity, in terms of the number of research publications per year, as evidenced by either an acceptance letter from the editors of a journal or actual publication in such journals.

Local/international journal, as used here, is a neutral concept applied to the relative locations of the publishing house and the author(s) of a work. In other words, any journal published outside the national geographic borders of an author is tagged "international", while one published within those borders is known as a "local" journal. There is no suggestion here of a superiority or inferiority status for either of these two nomenclatures, because what is counted as local for one author is counted as international for another author who lives across the border from where that same journal is published.

University environment is used here to mean the environment that supports those facilities which create enabling access for the conduct of research and research activities. It has to do with everything that stimulates and influences the behaviour of the individual (university lecturers) and the group (the university community), such as the availability of internet services, an inadequate number of systems/computers, interruptions in power supply, severe breakdowns, etc.

METHODOLOGY
A survey method was used. A total of 100 lecturers were randomly sampled from 10 departments cutting across the five campuses of the Cross River University of Technology. Of this total number of respondents, eighty-four were male and sixteen were female.

Table 1: Sample distribution of respondents by faculty, department, sex and campus location.
Faculty | Department | Male | Female | Campus
Engineering | Electrical | 9 | 1 | Calabar
Mass Communication | Mass Communication | 8 | 2 | Calabar
Medical Sciences | Bio Chemistry | 8 | 2 | Okuku
Medical Sciences | Physiology | 9 | 1 | Okuku
Management Sciences | Accounting | 7 | 3 | Ogoja
Management Sciences | Bus. Administration | 8 | 2 | Ogoja
Agriculture and Forestry | Agric. Econs/Ext. | 9 | 1 | Obubra
Agriculture and Forestry | Forestry | 10 | - | Obubra
Education | Physical & Health | 8 | 2 | Akamkpa
Education | Guidance & Counseling | 7 | 3 | Akamkpa
Total | | 84 | 16 |
Source: Field work 2007.




Data for this study were gathered with the use of a questionnaire, which was carefully designed in accordance with the specifications of the research. The first section, section "A", elicited responses on the socio-demographic characteristics of respondents. Section "B", on the other hand, contained questions on the issue under investigation, namely the determinants of research publications among university lecturers in academic journals. Items included in this section covered variables such as the number of publications by respondents (in both local and international journals), sources of funds for research, and access to research facilities. Data were analyzed with simple descriptive statistics (simple percentages); in addition, the Pearson product moment correlation coefficient (r) was used to test the relationship between the frequency of research publications and the environment of research.

PRESENTATION OF RESULTS / DISCUSSION OF FINDINGS
Table 2: Number of published research findings, according to departments and location of campuses.
S/N | Department | Location of campus | No. of research publications | Proportion of total
1 | Electrical Engineering | Calabar | 28 | 0.07
2 | Mass Communication | Calabar | 37 | 0.09
3 | Biochemistry | Okuku | 26 | 0.07
4 | Physiology | Okuku | 22 | 0.06
5 | Accounting | Ogoja | 46 | 0.12
6 | Business Admin. | Ogoja | 40 | 0.10
7 | Agric. Econs/Ext. | Obubra | 47 | 0.12
8 | Forestry | Obubra | 39 | 0.10
9 | Physical and Health Edu. | Akamkpa | 52 | 0.13
10 | Guidance and Counseling | Akamkpa | 64 | 0.16
Total | | | 401 |

From Table 2, a total of 401 research publications were produced by lecturers in the ten sampled departments of the university. Arising from the above, the Pearson product moment correlation coefficient (r) was used to test whether any relationship exists between the frequency of research publication and the environment of research. Specifically, the following hypotheses were tested.

H0: There is no significant relationship between the frequency of research publications among university lecturers and the environment of research.

H1: Environment is a significant factor in the frequency of research publications among university lecturers.

Decision rule: Accept (retain) the null hypothesis if the calculated r value is equal to or less than the tabulated r value at the 0.05 level of significance with 399 degrees of freedom; otherwise, reject the null hypothesis and accept the alternate hypothesis.

The dependent variable in the above hypothesis is the frequency of research publications, while the independent variable is the environment of research. To test this hypothesis, data from the variables were extracted into sums, squares, products and sums of squares. The summarized data were then subjected to statistical analysis using the Pearson product moment correlation procedure (r). The result of the analysis is presented in Table 3.
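For readers who wish to reproduce this kind of test, the Python sketch below computes Pearson's r from paired scores and applies the decision rule described above (reject H0 when the calculated r exceeds the critical r). The paired data and the critical value used here are invented placeholders for illustration, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson product moment correlation coefficient:
    r = (N*Σxy - Σx*Σy) / sqrt((N*Σx² - (Σx)²) * (N*Σy² - (Σy)²))."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx, syy = sum(v * v for v in x), sum(v * v for v in y)
    sxy = sum(a * b for a, b in zip(x, y))
    return (n * sxy - sx * sy) / math.sqrt((n * sxx - sx**2) * (n * syy - sy**2))

# Hypothetical paired observations: environment-of-research score (x)
# and number of publications (y) for a handful of lecturers.
env_score = [3, 5, 2, 4, 5, 1, 4, 3]
pubs      = [2, 6, 1, 4, 5, 1, 3, 2]

r_cal = pearson_r(env_score, pubs)
df = len(env_score) - 2
r_crit = 0.196   # placeholder; look up the critical r for your df at the 0.05 level

print(f"r = {r_cal:.3f}, df = {df}")
print("Reject H0" if abs(r_cal) > r_crit else "Retain H0")
```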









Table 3: Pearson product moment correlation coefficient (r) analysis of the relationship between the frequency of research publication and the environment of research (N = 401).
Variable | Σx or Σy | Σx² or Σy² | Σxy | df | r-cal
Environment of research (x) | 4384 | 15625 | 31102 | 399 | 0.547
Frequency of research publications (y) | 1975 | 18963 | | |
Source: Field work 2007. *Significant at P < 0.05, df = 399, critical r = 0.196.
From Table 3, the calculated r value of 0.547 was found to be greater than the critical r value of 0.196 needed for significance at the 0.05 alpha level with 399 degrees of freedom. The null hypothesis was therefore rejected and the alternate hypothesis accepted. This means that environment is a significant factor in the frequency of research publications among university lecturers. Put succinctly, university lecturers who are close to or have access to internet services, enhanced remuneration and relative ease in accessing funds from either government or the private sector, among other enabling factors, have a higher frequency of research publications than those who do not.

CONCLUSION AND RECOMMENDATIONS
Several factors can be identified as constraining the amount of research work in circulation. Primarily, however, the environment of research, perceived in terms of better remuneration, ease in accessing funds and accessibility of internet services, is a major determinant.

Research, through its impact on education, is an important determinant of development (Grabowski and Shields, 1996), and it is a priority in any effort towards fostering growth. This is all the more so because Nigeria's capacity to generate knowledge and participate in the knowledge society has continued to decline over the years. It is therefore the recommendation of this paper that internet services be provided on all the campuses of the Cross River University of Technology at a lower subscription rate, subsidized by the university authority. Also, there ought to be an alternative source of power supply to avoid the frequent interruptions from the national grid. Again, donor agencies should make it relatively easier to access funds once they confirm the authenticity/genuineness of a particular research project.

R E F E R E N C E S
Ada, N. A; Abu, C.N. and Ker, B. O. (1971). Essentials of thesis and project writing: A guide to research students in
tertiary institutions, Makurdi: Almond Publishers. Pp 5 – 10.

Alao, N. (1994). Higher education: the university. In Akintigbe, O.O. (ed.), Nigeria and Education: The Challenges Ahead. Ibadan: Spectrum Publishers Ltd., pp. 63 - 72.

Eboh, E. C.(2000). “Social Science Research and Sustainable Development”. In Anugwom et al, The social sciences,
issues and perspectives, Nsukka: Fulladu Publishing Company. Pp 74 – 81.

Enoh, C.O. (2004). An Introduction to Empirical Research in Social Science and the Humanities. Uyo: Etofia Media Services Ltd., p. 84.

Grabowski, R. and Shields, M.P. (1996). Development Economics. Oxford: Blackwell Publishers Ltd., pp. 110 - 115.

Nwagu, B.G.(1991). Education Research: Basic Issues and Methodology. Ibadan: Wisdom Publishers. Pp 65.






Oluikpe, B. (2002). Towards a New Direction in Igbo Studies Research. Journal of Liberal Studies. 9(1) Pp 11 – 28.

Peil, M. N. (2004). Social Science Research Methods. An African Handbook. London: Hodder and Stoughton Press. Pp
78 – 79.

Popper, G. J. (2002). Theory and Methods of Social Science Research. London: George Allen and Unwin Ltd., Pp 15 – 18.

Received for Publication: 16/03 /2010
Accepted for Publication: 02/05 /2010

Corresponding Author
Ogar, N. E.,
Dept. of Forestry and Wild Life Resources Mgt., Cross River University of Technology, Obubra, Nigeria.
E-mail: [email protected]


































Continental J. Information Technology 4:20 - 26, 2010 ISSN: 2141 - 4033
© Wilolud Journals, 2010 http://www.wiloludjournal.com

DESIGNING AND IMPLEMENTING A LOW – COST INITIATIVE FOR REMOTE SURVEILLANCE
TRANSMITTED OVER SATELLITE

E. O. Joshua1 and Adesina T. O.2
1Department of Physics, University of Ibadan, Ibadan, Nigeria and 2Department of Physical Sciences (Physics with Electronics option), Bells University of Technology, Ota, Nigeria

ABSTRACT
Surveillance is a very important aspect of security for many organisations; hence there is a need to harness all available resources to cater for surveillance needs. The facilities available for surveillance to date are not sufficient to cover remote facilities, so a new model for surveillance is needed. Here, a new model, surveillance over satellite, is designed to cover remote areas with surveillance facilities. The surveillance system is unmanned and can adequately cover a remote facility at low cost. The basis for the design is discussed and the conceptual model developed. The model was implemented using available simulation tools such as ADIsimPLL 3.1, Electronic Workbench 10 and ADIsimRF 1.2. This is followed by the practical application of the surveillance system and an evaluation of the design, comparing it with current trends in technology such as CCTV.

KEYWORDS: surveillance system, communication systems, satellite communications

INTRODUCTION
Communication as a field of study has evolved over time, with continual discoveries of ground-breaking technologies to cater for man's communication needs. This spans various areas, from data, voice and video communication to communications for specific uses. Applications in satellite communications have evolved over the years to adapt to competitive markets. Evolutionary development is a natural facet of the technology because satellite communication is extremely versatile. This is important to its extension to new applications yet to be fielded (Elbert, 1987).

Time and again, different hardware technologies have been implemented to cater for each of these needs, and these technologies are regularly reviewed and improved upon so as to proffer the best possible solutions to these needs and to keep up with the pace of technological advancement. This is evident in the recent shift from analog to digital communication systems. A study by Agilent Technologies notes that "the move to digital modulation provides more information capacity, compatibility with digital data services, higher data security, better quality communications and quicker system availability" (Agilent Technologies, 2001). Various techniques are employed for digital modulation, including Frequency Shift Keying (FSK), Amplitude Shift Keying (ASK), Phase Shift Keying (PSK) and Minimum Shift Keying (MSK). In each of these techniques, the phase, frequency or amplitude of the carrier signal is shifted and keyed. For efficient modulation, many trade-offs must be made in selecting a particular technique, the trade-offs being defined by the communications environment, data integrity requirements, data latency requirements, user access, traffic loading and other constraints. These new modulation techniques have been known in theory for many years, but have become feasible only because of recent advances in digital signal processing and microprocessor technologies (Simon, 2001).

Another challenge which has been reduced to a very minimal level in communication systems is noise. Noise is always present in every functional communication block (system), but the level to which it is reduced determines the performance of such a system. In a communication device, the only noise under the designer's control is the thermal and inter-modulation noise of the system; assuming the system is designed to minimize these, there is no further way of reducing the noise (Langton, 2001). More often than not, analog systems have been very vulnerable to corruption of signals by noise, but with the advent of digital systems, noise can be clipped effectively using appropriate hardware such as filters and robust transmission techniques.



Recently, in satellite communications, PSK techniques have proven efficient for modulation; hence, PSK is used in most satellite communications and some CDMA telephones. Compared with other schemes, PSK has excellent protection against noise because the information is contained within its phase (Kolawole, 2002).

Again, there is a wide variety of applications of satellite communications, including voice and video telephony, television signal transmission, the fixed satellite service, mobile satellite technologies, direct broadcast satellites, amateur radio, IP over satellite and military communications (Thomas et al., 2003; Monperrus et al., 2008; Sevgi, 2010).

The applications of satellite communications are wide and varied, but there are still many areas to explore. Since satellite communication relies on the principles and properties of transmission and reception of RF signals, it can be applied to practically any aspect of communication, be it digital or analog. It follows that satellite communication systems can be harnessed in the design of communication systems for specific uses.

Surveillance is a very important aspect of security and its importance cannot be overemphasized. This is evidenced by the huge losses that different countries have accumulated over time owing to the lack of good surveillance infrastructure. Surveillance has also grown over the years from its analog to its digital form, and beyond this to the more recent Closed-Circuit Television (CCTV) approach.

In this paper we present a new model of a low-cost, security-aware satellite communication system. The model is based on the basic facts that digital signals can be more easily transmitted over the satellite than analog signals and that video signals can be compressed and transmitted over the satellite.

METHODOLOGY
Modelling and Simulation of the Surveillance System
The transmission of video signals is the underlying principle in surveillance, but there are other intricacies that must be considered in the design of the transmitting and receiving earth terminals to ensure that the surveillance system performs optimally.

Some basic factors affect the efficiency of a communication system: the transmission power of the signal, the bandwidth, and the modulation technique employed to modulate baseband signals to RF for transmission (Nelson, 1998). There are also constraints peculiar to the design of the system, which include: siting of communication facilities; the frequency band required for transmission; availability of grid power supply at the remote location; cost of linkage to transmission satellite transponders; the effect of vandalism on the remote facility; the range of coverage of single cameras; and the life span of the entire system. Hence there is always a need to balance the trade-offs between the factors and the constraints (Nelson, 2001). Thus, to achieve maximum efficiency, careful selection was made of the hardware and components to be used. To balance the trade-off between the factors that determine the effectiveness of the communication channel and the constraints posed by the site of the remote facility, the following method was used:

An anomaly is expected to signal disruption of the normal activity of the remote facility. When an anomaly is detected in the vicinity of the remote facility by a simple intrusion detection circuit, which can be a sound, fire, smoke or motion detector circuit, a microcontroller unit receives a high logic level from the detection circuit. The microcontroller then triggers the entire system on; this unit is programmed to function as expected. As soon as the system comes on, a link is established with the network operating centre over the satellite, and video is collected from the cameras within that region, compressed and transmitted over satellite. It should be noted that video signals can be transmitted over a long distance and through any transmission medium when transmitted in digitized form, and can be multiplexed with any other signal. Hence, the video signal from a PAL-based surveillance CCD camera can be sampled and digitized based on the ITU-R BT.656 standard. This can further be compressed using the MPEG-2, MPEG-4 or JPEG-2000 format. The compressed digital video can then be sent through any medium.

The video stream from all cameras, alongside any other signal, be it data or sound (voice) can then be multiplexed,
modulated and sent over the satellite. To reduce the noise in the system, the signals are amplified and filtered. Once
transmitted, the incoming signal at the network operating centre wakes the system up, and the streamed video is down-converted, demodulated and decompressed into the original analog format. This is then further encoded into the format for delivery. At the network operating centre, the digital signal is demultiplexed and the various streams from each of the cameras are viewed on an array of LCD screens.

The incoming signals can be manipulated and stored on a retrieval system for further use, while security personnel are informed about the detected anomaly.
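The digital transmission step described above can be illustrated with a short MATLAB sketch; it is not the authors' implementation, but a minimal simulation of a compressed bit stream carried by QPSK over an additive white Gaussian noise link, with the bit count and Eb/N0 value chosen arbitrarily.

% Minimal QPSK-over-AWGN sketch for a hypothetical compressed video bit stream
nbits  = 1e4;                                % hypothetical number of bits
bits   = randi([0 1], nbits, 1);             % stand-in for the compressed video data
I      = 1 - 2*bits(1:2:end);                % odd bits -> in-phase component
Q      = 1 - 2*bits(2:2:end);                % even bits -> quadrature component
s      = (I + 1i*Q)/sqrt(2);                 % unit-energy QPSK symbols
EbN0dB = 8;                                  % assumed energy-per-bit to noise ratio
N0     = 1/(2*10^(EbN0dB/10));               % noise density (Eb = 1/2 per bit here)
noise  = sqrt(N0/2)*(randn(size(s)) + 1i*randn(size(s)));
r      = s + noise;                          % received symbols after the AWGN channel
rx          = zeros(nbits, 1);
rx(1:2:end) = real(r) < 0;                   % hard-decision demodulation
rx(2:2:end) = imag(r) < 0;
ber = mean(rx ~= bits);                      % bit error rate over the noisy link
fprintf('Simulated BER at Eb/N0 = %d dB: %g\n', EbN0dB, ber);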

The surveillance system was simulated and performance characteristic curves obtained. These curves measured the performance of the individual components required for the implementation of the device, and a cascaded analysis was also carried out.

The simulation was done in two parts:
Simulation of the performance of the variable-response components. This involved measuring the performance characteristics of each component and the working conditions of the system, and was carried out using the simulation packages ADIsimPLL 3.1 and Electronic Workbench 10.

Simulation of the transmission and reception stages in the design to obtain the values of the output characteristics. The simulation package used in this case was ADIsimRF 1.2.

System Description
The model flow diagram is shown in Figure 1.

Fig. 1 Operational Flow Diagram for the Surveillance over Satellite Model

From the model (Fig. 1), the surveillance system can be divided into four phases: the detection phase, which detects an anomaly and triggers the other components on; the video processing phase, where the analog (PAL) signal from the surveillance camera is processed and converted to digital, compressed video; the transceiver phase, where the compressed video is modulated, up-converted, transmitted, received, down-converted and demodulated; and the output phase, in which the signal undergoes decompression and is converted back to analog form for viewing, or is sent to a video/digital signal processor for further processing before viewing.



RESULTS AND DISCUSSION
The Simulation Results
The results of simulating the video signal, as would be produced from the modulator for up-conversion or for direct transmission through the satellite, are shown in the graphs below.

Figure 2 shows that at a very low offset frequency, the single-sideband phase noise of the VCO is minimal, and this produces a nearly perfect (noiseless) modulated signal from the system. With increasing offset frequency, however, the noise in the system increases considerably. An acceptable level for a typical synthesizer is about 60 dBc/Hz; hence, the offset frequency of the VCO should be kept at around 1 kHz.

Figure 3 shows the phase noise of the individual components and the total phase noise of the PLL/synthesizer. Here, the phase noise is lowest when the offset frequency is slightly above 100 Hz; this sets a threshold on the offset frequency for optimum performance.

Figure 4 gives the FM response for the amplitude and the phase. At a threshold offset frequency of 1 kHz, the modulation response is close to its peak for both the amplitude and the phase. The amplitude modulation response is about -12 dB with a phase margin of about 118°, while the phase response is slightly above 0 dB with a phase margin of about 160°. This means that for optimum performance of the synthesizer, the offset frequency should be kept around 1 kHz.

Hence, from the above plots we can conclude that for optimum performance of the digital synthesizer, as well as for the modulated signal to maintain its phase characteristics and reliability, the offset frequency should be maintained around 1 kHz.

From Figure 5 it can be deduced that the chip works more favourably, producing greater output power, under low-temperature conditions, but it can withstand temperatures as high as 85 °C without much drop in output power. Figure 6 shows that low temperatures are favourable for the operation of the demodulator; hence the temperature of the system should be kept low for optimum functionality and performance. A suitable range of frequencies for optimum transmission is between 1 and 100 MHz, at a response level of about 0 dB, which is a near-perfect response. The noise figure vs. RF frequency plot (Fig. 7) indicates that at a lower temperature the noise figure is reduced to about 14.5 dB, as against 17.5 dB, when operating at a frequency of about 1720 MHz. This also shows that low temperatures favour optimum performance of the chip and, in general, the system.

Hence, from the simulation of the modulator and the demodulator, we see that the temperature of the designed system is a determining factor in its optimum performance.



Fig. 2 Phase Noise vs. Frequency Plot for VCO
Fig. 3 Phase Noise Plot at Output Frequency
Fig. 4 Frequency Modulation Response at Output Frequency
Fig. 5 Conversion Gain and Input 1 dB Compression Point (IP1dB) vs. RF Frequency
Fig. 6 Single Sideband (SSB) Output Power (POUT) vs. Output Frequency and Temperature
Fig. 7 Noise Figure vs. RF Frequency

Fig. 8 Plot of Input Power vs. Output Power for the Transmitter and Receiver


System application
This system, which is referred to for short as “surveillance over satellite”, will be found useful in many areas
including:
Monitoring of petroleum, petroleum products and gas pipelines

Monitoring of oil refineries

Monitoring of nuclear, gas and other power plants

Monitoring of water treatment plants for public use

Monitoring of power supply grids

Real time surveillance of assets

Tracking of goods and many more.

CONCLUSION
The surveillance over satellite model is based on the basic facts that digital signals can be more easily transmitted over the satellite than analog signals and that video signals can be compressed and transmitted over the satellite.

Evaluating the model design, two things can be deduced.
There is a great deal of flexibility in the design. This is shown by the ease with which the components can be rearranged in the case of a future upgrade to the system, and each component can also be replaced individually should the need arise. Another factor that accounts for flexibility is that, since virtually all the components used are based on monolithic CMOS integrated circuits, there is little weight to grapple with.

The design also achieves a low cost margin for the entire setup. Comparing the cost of designing and constructing this new model for surveillance over satellite with the cost of previous methods using DVRs and other off-the-shelf finished products, the cost of the new model is about 30-40% less than that of the old system. Hence, the model is able to achieve the low-cost objective.

Comparing the above model with the conventional closed-circuit television approach to surveillance: CCTV transmits via LAN and over the internet, and is limited to areas which have access to internet facilities. To dedicate a VSAT internet facility to a remote site solely for surveillance purposes would amount to a huge waste of resources because of the cost of servicing both the ISP link and the surveillance system, and where fiber optics is used for transmission it becomes very expensive; fiber optic cables can also get damaged along the line, and when this happens it is very difficult to trace where the damage is.

Also, CCTV cannot be effective over great distances, for example between countries; hence, it becomes imperative to find another means of transmitting the video data.











REFERENCES
Agilent Technologies (2001). Digital Modulation in Communication Systems.

Couch, L. W. (2007). Digital and Analog Communication Systems. Pearson Education Inc.

Elbert, B. R., (1987) Introduction to Satellite Communication, Norwood, MA: Artech House.

Kolawole M. O, (2002), Satellite Communication Engineering, Marcel Dekker Inc

Langton, C. (2001). Intuitive Guide to Principles of Communication: Coding Concepts and Block Coding. www.complextoreal.com.

Simon, M. K. (2001). Bandwidth-Efficient Digital Modulation with Application to Deep Space Communications. John Wiley.

Monperrus, M., Jaozafy, F., Marchalot, G., Champeau, J., Hoeltzener, B. and Jezequel, J. M. (2008). Model-driven Simulation of a Maritime Surveillance System. Proceedings of the 4th European Conference on Model Driven Architecture (ECMDA 2008).

Nelson R.A, (1998) A Primer on Satellite Communication.

Nelson R. A, (2001), Modulation, Power and Bandwidth: Tradeoffs in communication systems design, Via
Satellite

Sevgi, L. (2010). Modeling and simulation strategies in high frequency surface wave radars. Turk. J. Elec. Eng. and Comp. Sci., Vol. 18, No. 3.

Cioppa, T. M., Willis, J. B., Goerger, N. D. and Brown, L. P. (2003). Research Plan Development for Modeling and Simulation of Military Operations in Urban Terrain. Proceedings of the 2003 Winter Simulation Conference, S. Chick, P. J. Sanchez, D. Ferrin and D. J. Morrice (eds.), pp. 1046 - 1051.

Received for Publication: 12/04 /2010
Accepted for Publication: 02/05 /2010

Corresponding author
E.O. Joshua
Department of Physics, University of Ibadan, Ibadan, Nigeria
Email: [email protected]















Continental J. Information Technology 4:27 - 30, 2010 ISSN: 2141 - 4033
© Wilolud Journals, 2010 http://www.wiloludjournal.com

IMPACT OF ICT ON CLIMATE CHANGE

E.C Arihilam and Eguzo C. V

Electrical/Electronic Engineering Department, Akanu Ibiam Federal Polytechnic, Unwana, Ebonyi State,
Nigeria

ABSTRACT
In spite of the clamour for the adoption of technologies to bridge the ever-growing digital divide around the globe, information and communication technology (ICT) has been identified as a means of increasing awareness and sensitization in mitigating the impact of climate change and other related issues. The effects of global warming, the greenhouse effect and other human-related environmental impacts have been felt in different places, measures and formations. As Nigeria continues to grow its information markets, this piece examines the opportunities for emission savings in applying ICT and argues that, by enabling other sectors to reduce their emissions, the industry could decrease global emissions. This will enable a low-carbon economy in this information age.

KEYWORDS: Chlorofluorocarbons, sensitization, e-sustainability, SMART, Proliferation,
communication.

INTRODUCTION
Climate change, according to the United Nations Framework Convention on Climate Change, is defined as a change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability observed over comparable time periods (Orimisan, 2010).

The Intergovernmental Panel on Climate Change (IPCC) describes global warming as the increase in the average temperature of the earth's near-surface air and oceans. Global surface temperature increased by 0.74 ± 0.18 °C during the 20th century, and climate model projections summarized in the latest IPCC report indicate that global surface temperature is likely to rise by a further 1.1 to 6.4 °C during the 21st century.

Research has shown that the processes behind this rapid climatic change include variation in solar radiation, mountain-building, continental drift, greenhouse gas concentration, human influence and more. The human contributions include high levels of CO2 emission from fossil fuel combustion, land use, ozone depletion, animal agriculture and deforestation (Climate change, 2010).

ICTs do contribute to global warming, but more importantly they are the key to monitoring and mitigating its effects, says the UN telecom agency's Secretary-General, Hamadoun Toure. Since the Kyoto Protocol was adopted in late 1997, the number of ICT users has tripled globally and the sector releases some 2 to 3 percent of all emissions. But the International Telecommunication Union (ITU) stressed that these technologies are also part of the solution to climate change and could help curb emissions by anywhere between 15 and 40 percent, depending on the methodologies used to come up with the estimates.

MAN-MADE CAUSES OF CLIMATE CHANGE
CO2 is the chemical formula for carbon dioxide, a chemical compound composed of two oxygen atoms covalently bonded to a single carbon atom. Carbon dioxide is the number one cause of man-made climate change. A colourless gas consisting of one carbon and two oxygen atoms, carbon dioxide exists in the gaseous state above -78.5 °C and in the solid state (dry ice) below -78.5 °C, passing directly between the solid and gaseous states by sublimation.

CO2, in combination with other chemical compounds, produces chlorofluorocarbons (CFCs), which are responsible for ozone depletion and global warming leading to climate change. Other components of ICT that contribute to climate and environmental degradation include:


ELECTRONIC EQUIPMENT
Personal computers with cathode ray tube (CRT) monitors, televisions and similar equipment that generate temperatures above 60 °C over long working hours can reduce air quality, cause health-related symptoms and reduce the productivity of office workers. The main sources of pollution are electronic components, plastic additives and flame retardants, which have been known to produce chemical emissions at temperatures above 60 °C. Printers and copiers in particular are known to introduce unhealthy solid particles and gases into buildings and expose their occupants to brominated flame retardants, dusts, ozone, volatile organic compounds and ammonia.

FOSSIL FUEL EXHAUSTION
Manufacturing computers is material-intensive; the total fossil fuel used to make one desktop computer weighs over 240 kg, some 10 times the weight of the computer itself. This is very high compared with many other goods: for an automobile or a refrigerator, the weight of fossil fuel used for production is roughly equal to the product's own weight. The environmental impacts associated with using fossil fuels are significant and deserve attention.

E-WASTE
Lax regulation of trans-border shipments has led to significant quantities of used electrical and electronic equipment being shipped to developing and underdeveloped countries. The condition and quality of these exported electronics are substantially low because they are either not functioning or have only a short remaining service life. Nigeria, Ghana, India and others have been at the receiving end of these waste dumps, which constitute a health risk and an environmental threat owing to low-quality waste management systems (Sander and Schilling, 2010). When these e-materials are out of use, they are either burnt, buried or dumped into waterways; much of the waste ends up being discarded along rivers and roads. In some cases, dealers and traders pay workers and scavengers a pittance to burn the plastic casings and wire insulation in broken machines and strip out sought-after materials such as gold and copper. This low-tech recovery process can expose such workers and their local environments to lead, cadmium, mercury and other hazardous materials used to build electronics. The workers can also be exposed to carcinogenic compounds called dioxins, which are by-products of incinerated plastics.

ICT IN GLOBAL CHALLENGES
ICT has significantly shaped the future and helped address global challenges. In a bid to aid the recovery of the global economy, a number of nations have invested in massive stimulus packages; in the USA, for instance, 13 percent of stimulus spending was dedicated to IT. It can therefore be logically argued that IT is a foundation from which many of the globe's current challenges, including climate change, could be successfully addressed. The world is currently facing challenging times characterized by multifaceted issues and concerns, including sustainable development, population and resources, democratization and the global convergence of IT. These global issues have been further exacerbated by the current global financial crisis, which has led to a massive erosion of wealth as a result of job losses, stock market collapses and extreme volatility in oil and other commodities. The four key IT trends that have the potential to help humanity combat these challenges are:
1. INTERNET COMPUTING: offers businesses cheaper computing with added flexibility.

2. MOBILE PHONES: fast becoming the interface for everything, and may soon surpass the TV as the dominant mode of entertainment and media consumption.

3. WIRELESS AND SATELLITE COMMUNICATION: offers cheap, diverse and reliable telecommunication in the event of unforeseen natural disasters. As seen in Haiti, wireless broadband and satellite links played a dependable role in the rescue, recovery and rebuilding of the quake-hit country; when terrestrial networks and even the undersea and other cable-dependent networks were knocked out by the earthquake, only Very Small Aperture Terminals (VSAT) remained a dependable means for ISPs in Haiti.

4. CONVERGENCE OF THE 4Cs (Collaboration, Communication, Community and Contact): the fusion of traditional collaboration tools with media and user content to produce a healthy mix of possibilities, giving rise to phenomena like Facebook, YouTube, Twitter and wikis.

COMBATING CLIMATE CHANGE THROUGH RADIO COMMUNICATION
The use of the radio spectrum for meteorological services aimed at weather, water and climate monitoring and prediction can be employed to fight climate change. Radio-based ICT applications such as remote sensors are currently the main source of observation and information about the earth's atmosphere and surface. Between 1980 and 2005, over 7,000 natural disasters worldwide took the lives of more than two million people and produced economic losses estimated at over USD 1.2 trillion, according to reports. Ninety percent of these natural disasters, 72 percent of casualties and 75 percent of economic losses were caused by water, climate and water-related hazards such as droughts, floods, severe storms and tropical cyclones. Climate change monitoring and disaster prediction mechanisms are therefore increasingly vital for our personal safety and economic well-being. Furthermore, both the ITU and the WMO (World Meteorological Organization) should collaborate on the use of radio-based ICT technologies for weather, water and climate applications. Frequency bands required by measuring instruments such as radars, satellites and radiosondes should be kept free from interference by other users. We recommend that the ITU allocate additional spectrum for observation systems involved in monitoring climate change.

GeSI INITIATIVE
According to the Global e-Sustainability Initiative (GeSI), transformation in the way people and businesses use technology could reduce annual man-made global emissions by 15 percent by 2020 and deliver energy efficiency savings to global business of over 500 billion euros (http://www.sustainablelifemedia.com/contents/story/greenIT/globalIT). The report showed that while the ICT sector has a footprint of two percent of global emissions, this will almost double by 2020.

This is countered by the sector's unique ability to monitor and maximize energy efficiency both within and outside its own sector, which could cut CO2 emissions by up to five times this amount. According to the report, this represents a saving of 7.8 gigatonnes of carbon dioxide by 2020, greater than the current annual emissions of either the United States or China. The report added that tele-working, video-conferencing, e-paper and e-commerce are increasingly commonplace, replacing physical products and services with their virtual equivalents. Below are some of the key findings of the GeSI initiative which, we believe, will help achieve a low-carbon economy once they become operational.

THE SMART 2020: KEY FINDINGS
SMART GRIDS: Research is ongoing on how to develop and manage a smart grid for power supply. Such a grid would supply only the required power and reduce losses to a minimum. Once developed, this can be applied to ICT equipment and devices to reduce power wastage.

According to Eggleston et al. (2002), IT equipment accounts for 26 percent of electricity consumption in an office in California. This is also applicable to cities in developing countries, where heat-generating equipment is still in use and the power required to provide appropriate cooling systems for it is enormous, thus contributing to ineffective energy management. Office and networking equipment generate heat and require proper cooling to function effectively. Other big SMART 2020 opportunities include:

SMART MOTOR SYSTEMS: Reducing electricity consumption in industry through optimized motors and automation.

SMART LOGISTICS: Improving the efficiency of transport and storage, e.g. solar-powered automobiles.

PROLIFERATION OF ICT INFRASTRUCTURE
Arguably, most ICT/telecommunication infrastructure could well be shared among operators to reduce its impact on climate change. Electronic ICT/telecommunication infrastructure that could be shared includes base tower stations and transceivers for signal processing and transmission. Just as it reduces negative environmental impact through the sharing of antenna farms to limit radiation, incessant digging up of roads and pavements, and the proliferation of ICT/telecommunication masts, infrastructure sharing enables operators to focus more on service innovation instead of network deployment. Service providers should now be told that Nigeria has gone beyond the stage of initial ICT deployment, when each service provider deployed independent masts, equipment and switches just to make quick returns, with the result that the whole land appeared littered with masts, some of which were put up in such a hurry that obvious engineering specifications were ignored.

Collocation is what we should be thinking about now that these infrastructures are impacting on our environment in diverse ways. In addition, we should be thinking of other environmentally friendly technologies to boost our growth in ICT, such as satellite and wireless technologies.

CONCLUSION
As we look forward to the growth of ICT use to about 4 billion users by 2020, with Nigeria already having a mobile subscriber base of over 67 million within nine years of the telecommunication boom, it is crucial to continue to recognize that ICT tools have become imperative for the new socially networked generation and must be used to collectively reduce global ICT emissions. Further sustainable growth is possible by adopting trends like virtualization of data centres, long-life devices, smart chargers, next-generation networks and growth in renewable energy consumption by encouraging solar-powered base stations. ICT tools could help manage and reduce emissions in the ICT sector itself, especially from data centres and telecommunication networks. Industry regulators like the Nigerian Communications Commission (NCC) and industry interest groups like the Association of Telecom Companies of Nigeria (ATCON) should ensure that the industry adheres to type-approved inventions.

REFERENCES
Orimisan, B. (2010, June 17). Radio communication for meteorology to combat climate change. Guardian Newspapers Ltd., pp. 25 and 37.

Climate change (2010). Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Climate_change, accessed April 2010.

DailyTech (2010) Microsoft Launches African “Green Laptop Initiative”,
http://www.dailytech.com/Microsoft+launch+African+Green+laptop

Eggleston, K., Jensen, R. and Zeckhauser, R. (2002). Information and Communication Technologies, Markets and Economic Development. Discussion Paper Series 0203, Department of Economics, Tufts University.

Gartner (2005); Green IT, The new industry Shockwave presentation of symposium/ITXPO conference.
http://www.greenhouse.gov.au/inventory

Global IT Group Study on Climate Change Impact and Sustainable life media
http://www.sustainablelifemedia.com/contents/story/greenIT/globalIT

ICT and Climate Change http://www.itu.int/net/itunews/issues/2009/08/20.aspx

Sander K. and Schilling S. (2010); Transboundary Shipment of Waste Electrical and Electronic Equipment
scrap – Optimization of material flows and control. Federal Environmental Agency, Germany.

Received for Publication: 12/04 /2010
Accepted for Publication: 02/05 /2010

Corresponding author
Eguzo C. V
Electrical/Electronic Engineering Department, Akanu Ibiam Federal Polytechnic, Unwana, Ebonyi State,
Nigeria
Email:[email protected]



















Continental J. Information Technology 4: 31 - 41, 2010 ISSN: 2141 - 4033
© Wilolud Journals, 2010 http://www.wiloludjournal.com

PREDICTION OF A COMMUNICATION SATELLITE DRIVEN BY WHITE NOISE SEQUENCE
(A DETERMINISTIC APPROACH TO LINEAR SIGNAL FILTERING)

E. O. Oyediran, J. J. Biebuma and M. O. Deinbo-Briggs
Department of Electrical/Electronic Engineering, University of Port Harcourt, Nigeria.

ABSTRACT
This paper presents a qualitative evaluation of a communication satellite driven by a white noise sequence, and proposes a filtering solution to predict the dynamics of the satellite. The prediction of the position and velocity of the communication satellite, though perturbed by the effects of the solar array, gravity waves and drag resistance, was achieved using a least-squares-based filter to determine an unbiased estimate. The dynamics of the satellite, assumed linear, were examined under three broad headings: parametric estimation (the least mean square method), noise in communication systems, and the Kalman filter. The significant practical benefits of using MATLAB and its Kalman filter blockset in the simulations were established.

KEYWORDS: Communication satellite, white noise, least mean square, unbiased estimates, Kalman filter

INTRODUCTION
Filtering is desirable in many situations in engineering and embedded systems. Radio communication signals are often corrupted by, or heavily correlated with, noise. A good filtering algorithm can remove the noise from electromagnetic signals while retaining the useful information (Simon, 2001). For many space applications involving data communication, a large number of satellites have been launched into earth orbit; for effectiveness in their operation, the attitude must be observable and the dynamics predictable. It is possible to obtain a time history of the attitude of a communication satellite by suitable instrumentation and telemetry (Ekejiuba, 1986). A near-earth satellite, for example, orbiting in the altitude range of 150 km to 450 km encounters small but non-negligible aerodynamic forces due essentially to drag resistance, while gravity waves in a stably stratified atmosphere especially affect the free and forced rotation of astronomical satellites (Sohn, 1995). The influence of major environmental forces on the attitude response of gravity gradient satellites, studied using both analytical and numerical techniques, presents a problem of major interest to the telecommunication engineer. This is a game-theory approach: nature tries to maximize the estimation error, while the engineer tries to minimize it (Simon, 2008). This work models the dynamics of a communication satellite driven by a white noise sequence, and proposes a finite-time signal filtering solution using the Kalman filter, in which state estimation is addressed. Theoretically, the Kalman filter is an estimator for what is called the linear-quadratic problem, the problem of estimating the instantaneous "state" of a linear dynamic system perturbed by white noise, using measurements linearly related to the state but corrupted by white noise. The resulting estimator is statistically optimal with respect to any quadratic function of the estimation error. Practically, the Kalman filter is one of the greatest discoveries in the history of statistical estimation theory and possibly the greatest discovery of the twentieth century. It has enabled humankind to do many things that could not have been done without it, and it has become as indispensable as silicon in the makeup of many electronic systems. Its most immediate applications have been in the control of complex dynamic systems such as continuous manufacturing processes (Grewal and Andrews, 2008). The Kalman filter provides an efficient computational means to estimate the state of a process in a way that minimizes the mean squared error. It supports estimation of past, present and even future states, and it can do so even when the precise nature of the modeled system is unknown. Our approach determines the best estimate: the measured values of the input and output of the system are required, whereby the measured error covariance and the estimated error covariance must be approximately the same for the estimate to be unbiased.

Statement of the Problem
NigComSat-1 was the first African geosynchronous communication satellite (CGWIC, 2007). It was launched into orbit aboard a Chinese Long March 3B carrier rocket from the Xichang Satellite Launch Centre in China. The satellite, the third Nigerian satellite to be placed into orbit, was launched into a geosynchronous transfer orbit and subsequently inserted into a geosynchronous orbit, positioned at 42.5°E, on 13 May 2007. It had a launch mass of 5,150 kg and an expected service life of 15 years. The satellite, designed to operate over Africa, parts of the Middle East and southern Europe, was to serve mobile phone users, facilitate internet performance in remote regions and improve overall telecommunications. The spacecraft was operated by NigComSat and the Nigerian space agency, NASRDA, and was monitored and tracked by a ground station built in Nigeria's capital, Abuja. On 10 November 2008 (09:00 GMT), the satellite was reportedly switched off for analysis and to avoid a possible collision with other satellites. According to Nigerian Communications Satellite Limited, it was put into "emergency mode operation in order to effect mitigation and repairs" (AFP, 2008). On 11 November 2008, NigComSat-1 failed in orbit after running out of power due to an anomaly in its solar array. The need to predict and achieve an unbiased estimate of the actual path and position of a communication satellite is therefore of great concern, hence this study.

Significance of the Study
This study will no doubt be of immense benefit in tracking a communication satellite, as it helps to predict the present and future position of the satellite at any required time. Since received signals are heavily correlated with noise, filtering is desirable in order to control a dynamic system; for this application, it is not always possible or desirable to measure every variable that one wants to control. The Kalman filter is used for predicting the likely future course of dynamic systems that people cannot control directly; it provides a means of inferring the missing information from indirect and noisy measurements and gives the best estimate of the signal output, thereby improving the signal-to-noise ratio and efficiency, which are the desires of the telecommunication engineer.

The State Estimation Problem
The procedure employed in least-squares estimation of a process is usually carried out for a sequence of different-order process and noise models until the best and simplest possible model is obtained. The more general estimation problem can be formulated on the basis of maximum likelihood and Bayesian techniques (Athans, 1971; Kalman, 1960) using statistical information in terms of joint probability distribution functions. However, for the linear dynamic system with additive, zero-mean white Gaussian measurement noise defined in terms of mean values and variances, which is appropriate for many practical problems, the least-squares solution formulated as a deterministic problem with appropriate weighting leads to the maximum likelihood estimate (Oyediran, 2010). Computation of the optimal estimates should consequently rely on convergence of the iteration employed; this is accomplished through sequential filtration of the estimate.

Here, we consider a time-series model of the form

x(j) + \sum_{i=1}^{m} \alpha_i x(j-i) = \sum_{i=0}^{m} b_i u(j-i)     (1)

The partial fraction expansion required for obtaining a time solution to (1) is defined for distinct roots, using the method of residues, as

Y(s) = \sum_{k} \sum_{v=1}^{m_k} \frac{\sigma_{vk}}{(s - \alpha_k)^{v}}     (2)

where \alpha_1, ..., \alpha_k are the poles of the function Y(s), with multiplicities m_1, ..., m_k.

The inversion integral was then written in the form

y(t) = \frac{1}{2\pi i} \int_{\sigma - i\infty}^{\sigma + i\infty} Y(s) e^{st}\, ds = \sum_{k} \sum_{v=1}^{m_k} \frac{\sigma_{vk}}{(v-1)!} t^{v-1} e^{\alpha_k t}, \quad t > 0     (3)

so that the function Y(s) with contaminated noise becomes

Y(s) = \left( \sum_{i=1}^{n} \frac{\beta_i}{s + \alpha_i} \right) U(s) + \text{white noise}     (4)

where \alpha_i, \beta_i are the parameters of the system and n is the order of the system.
Using the shift operator z^{-1}, defined by z^{-1} x(j) = x(j-1), equation (1) becomes

y(k) = \frac{B(z^{-1})}{A(z^{-1})} u(k) + \lambda \frac{C(z^{-1})}{A(z^{-1})} e(k), \quad k = 1, ..., N     (5)


where y(k) is the observed output signal, u(k) is the applied input signal, N is the number of samples and e(k) is the noise sequence. The polynomial operators A, B and C are defined as follows:

A(z^{-1}) = 1 + \sum_{i=1}^{N_A} \alpha'_i z^{-i}, \qquad B(z^{-1}) = \sum_{i=1}^{N_B} b'_i z^{-i}, \qquad C(z^{-1}) = 1 + \sum_{i=1}^{N_C} c'_i z^{-i}     (6)
If the continuous model in (4) is discretized, we obtain a discrete-time model of the form

y(k) = \sum_{i=1}^{n} \frac{b_i z^{-1}}{1 + a_i z^{-1}} u(k) + \lambda e(k)     (7)

where the parameter set (a, b) is related to the parameter set (\alpha, \beta) via the relations

a_i = -\exp(-\alpha_i T) \quad \text{and} \quad b_i = \frac{\beta_i}{\alpha_i} \left( 1 - \exp(-\alpha_i T) \right)     (8)

where T is the sampling interval in seconds. The infinite noise of the discrete observations due to aliasing that may result from the above discretization process is negligible. This is justified if u(k) and e(s) are independent for all k and s. This is a reasonable assumption as long as the identification is performed on data acquired from experiments, where u(k) is an a priori known sequence. In some practical situations this assumption is violated when operating records are used, because in such a case the input may depend on the output through feedback.
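As a quick numerical illustration of the mapping in (8), the sketch below converts one hypothetical continuous-time pair (alpha, beta) to its discrete-time counterpart; the values of alpha, beta and T are chosen arbitrarily.

% Discrete-time parameters from a continuous-time pair via eq. (8)
alpha = 2;          % hypothetical continuous-time parameter (1/s)
beta  = 1;          % hypothetical continuous-time gain
T     = 0.1;        % sampling interval in seconds
a = -exp(-alpha*T);                       % a_i = -exp(-alpha_i*T)
b = (beta/alpha)*(1 - exp(-alpha*T));     % b_i = (beta_i/alpha_i)*(1 - exp(-alpha_i*T))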

The canonical model in (5) was made equivalent to the discrete-time model of (7) by satisfying the conditions

(i)  c'_i = a'_i     (9)

(ii) \frac{B(z^{-1})}{A(z^{-1})} = \sum_{i=1}^{n} \frac{b_i z^{-1}}{1 + a_i z^{-1}}

The statistical method of maximum likelihood was employed to optimize the probability of obtaining the expected result. Consequently, a loss function V(\theta) was defined as follows:

V(\theta) = \frac{1}{2} \sum_{k=1}^{N} \varepsilon^2(k)     (10)

which is minimized with respect to the system parameter set \theta = [a', b']. The residues were defined by

\varepsilon(k) = y(k) - \frac{B(z^{-1})}{A(z^{-1})} u(k)     (11)

so that the values of the parameter sets a' and b' that make V(\theta) in (10) minimum are the estimates of the parameters of the system. The approach considered is one of finding the coefficients of the prediction model
\hat{y}(k \mid k-1) = \frac{B(z^{-1})}{C(z^{-1})} u(k) + \frac{C(z^{-1}) - A(z^{-1})}{C(z^{-1})} y(k)     (12)

so that the mean square prediction error

V(\theta) = \frac{1}{2} \sum_{k=1}^{N} \left[ y(k) - \hat{y}(k \mid k-1) \right]^2 = \frac{1}{2} \sum_{k=1}^{N} \varepsilon^2(k)     (13)


is as small as possible. In doing so, the assumption of a Gaussian distribution for the noise sequence \varepsilon(k) may be relaxed. In the model given in (7), the residues were obtained as

\varepsilon(k) = y(k) - \left( \sum_{i=1}^{n} \frac{b_i z^{-1}}{1 + a_i z^{-1}} \right) u(k)     (14)

For system order greater than n, \varepsilon(k) as given in (14) becomes computationally difficult and highly nonlinear, whereas in the model of (7), the residues given by (11) may be computed quite easily for any given system order. Since the two models are equivalent via the transformations defined in (9), we are free to use either of them. Thus the form of residues given in (11) is recommended and used in this study for parameter estimation, and the need for a filter for signal noise attenuation becomes evident.
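The residues in (11) can be generated directly with a digital filter. The sketch below uses synthetic data and arbitrary candidate coefficients for A(z^{-1}) and B(z^{-1}); only the form of equations (10) and (11) is taken from the text.

% Residues of eq. (11) and loss function of eq. (10) on synthetic data
N  = 200;
u  = randn(N,1);                            % hypothetical input record
y  = filter([0 0.5], [1 -0.8], u) ...
     + 0.05*randn(N,1);                     % hypothetical noisy output record
Bc = [0 0.5];                               % candidate B(z^-1) = 0.5 z^-1
Ac = [1 -0.8];                              % candidate A(z^-1) = 1 - 0.8 z^-1
e  = y - filter(Bc, Ac, u);                 % residues eps(k) = y(k) - (B/A) u(k)
V  = 0.5*sum(e.^2);                         % loss function V(theta) of eq. (10)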

Measurement Noise Filtration
The relationship between information, bandwidth and noise

Information rate     (15)

Signal-to-noise ratio: SNR (dB) = 10 \log_{10}(S/N)     (16)

There is a theoretical maximum to the rate at which information passes error-free over a channel (Jianfeng, 2007). This maximum is called the channel capacity, C. The famous Hartley-Shannon law states that the channel capacity C is given by

C = B \log_2(1 + S/N)     (17)

For example, a 10 kHz channel operating at an SNR of 15 dB (S/N = 31.623) has a theoretical maximum information rate of 10000 \log_2(1 + 31.623), approximately 50,280 b/s.
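The worked example can be reproduced with a few lines of MATLAB; the numbers below are just the example's values restated in code.

% Hartley-Shannon capacity for a 10 kHz channel at 15 dB SNR, eq. (17)
B     = 10e3;                  % channel bandwidth in Hz
SNRdB = 15;                    % signal-to-noise ratio in dB
SNR   = 10^(SNRdB/10);         % linear SNR, approximately 31.623
C     = B*log2(1 + SNR);       % channel capacity in bits per second
fprintf('Channel capacity: %.0f b/s\n', C);   % prints about 50280 b/s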

METHODS
Figure 1 shows the block design. The model has three main functions: it generates the position, velocity and acceleration in polar (range-bearing) coordinates; it adds measurement noise to simulate inaccurate readings by the sensor; and it uses a Kalman filter to estimate position and velocity from the noisy measurements.

Figure 1 (a and b): Block diagram of the study design (Kalman filter tracking of the communication satellite)

The model is designed around the kalman function and the following blocks:
1. Kalman Filter
2. Range-Bearing to XY
3. XY Acceleration Model

Table 1: Showing Simulation Parameter
Simulation Parameter Value
Solver FixedStepDiscrete
RelTol 1e-3
AbsTol 1e-6
Refine 1
MaxStep 0.1
MaxOrder 5
ZeroCross On

Table 2: Band-Limited White Noise Block Properties
Name                      Cov            Ts    Seed
Measurement Noise         [2 2]          0.1   [52341 62341]
Random Satellite Motion   [0.001 .001]   0.1   [58413 74925]


Table 3: Gain Block Properties
Name:                 Meas. Noise Intensity
Gain:                 [300 0.01]
Multiplication:       Element-wise (K.*u)
Param Min / Max:      [ ] / [ ]
Param Data Type Str:  Inherit: Same as input
Out Min / Max:        [ ] / [ ]
Out Data Type Str:    Inherit: Same as input

Table 4: Showing Block Types and Block Names
BlockType                       Count   Block Names
SubSystem                       2       Kalman Filter, XY Acceleration Model
Outport                         2       Est. Position [x, xdot, y, ydot], Residuals
DiscreteIntegrator              2       XY Position, XY Velocity
Band-Limited White Noise (m)    2       Measurement Noise, Random Satellite Motion
xy2rtheta (m)                   1       XY to Range-Bearing
Sum                             1       Sum
Gain                            1       Meas. Noise Intensity


Table 5: Model Variables Showing Initial Value
Variable Name   Parent Blocks              Calling String   Value
Speed           XY Position, XY Velocity   [0, Speed]       400

First we specify the satellite model with the process noise. Consider the discrete linear system with additive Gaussian noise w_k on the input u_k:

x_{k+1} = A x_k + B (u_k + w_k)     (18)
y_k = C x_k + z_k     (19)

We want to use the available measurements y to estimate the state of the system x. We know how the system behaves according to the state equation, and we have measurements of the position, so how can we determine the best estimate of the state x? We want an estimator that gives an accurate estimate of the true state even though we cannot measure it directly. For this system, w is the process noise and z is the measurement noise. We assume that the average value of w is zero and the average value of z is zero, and further that no correlation exists between w and z, bearing in mind the properties of white noise. That is, at any time k, w_k and z_k are independent random variables. The noise covariance matrices S_w and S_z are then defined as

Process noise covariance:      S_w = E(w_k w_k^T)     (20)
Measurement noise covariance:  S_z = E(z_k z_k^T)     (21)

where w^T and z^T indicate the transposes of the w and z random noise vectors, and E(·) denotes the expected value.
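
A quick numerical check of definition (20) for a scalar white noise sequence is sketched below; the variance value matches the one used later in the simulation, but the check itself is only illustrative.

% Sample estimate of the process noise covariance S_w = E(w w^T), scalar case
Sw = 3.2;                      % assumed process noise covariance
N  = 1e5;                      % number of samples
w  = sqrt(Sw)*randn(1, N);     % zero-mean white noise with variance Sw
Sw_hat = (w*w')/N;             % sample estimate, close to 3.2 for large N
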
There are many alternative but equivalent ways to express the Kalman filter equations. One formulation is given as follows:

K_k = A P_k C^T (C P_k C^T + S_z)^{-1}     (22)
\hat{x}_{k+1} = (A \hat{x}_k + B u_k) + K_k (y_{k+1} - C \hat{x}_k)     (23)
P_{k+1} = A P_k A^T + S_w - K_k C P_k A^T     (24)

The K matrix is called the Kalman gain, and the P matrix is called the estimation error covariance. The aim is to estimate the output y_k given the inputs u_k and the noisy output measurements y_v(k) = C x_k + z_k, where z_k is some Gaussian white noise. The first task during the measurement update is to compute the Kalman gain. The next step is to actually measure the process to obtain y_k, and then to generate an a posteriori state estimate by incorporating the measurement. The final step is to obtain an a posteriori error covariance estimate.

After each time and measurement update pair, the process is repeated, with the previous a posteriori estimates used to project or predict the new a priori estimates. This recursive nature is one of the very appealing features of the Kalman filter: it makes practical implementations much more feasible than, say, an implementation of a Wiener filter, which is designed to operate on all of the data directly for each estimate. The Kalman filter instead recursively conditions the current estimate on all of the past measurements (Welch and Bishop, 2006).
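
A minimal sketch of this recursion, written directly from equations (22)-(24), is given below; the system matrices, the input and the noise variances are placeholders chosen only so that the loop runs, and the true state is simulated alongside the estimate for comparison.

% Minimal Kalman recursion following eqs. (22)-(24) on a placeholder system
A  = [1 1; 0 1];   B = [0.5; 1];   C = [1 0];   % hypothetical linear model
q  = 0.01;                 % assumed variance of the input process noise w
Sw = B*q*B';               % resulting process noise covariance on the state
Sz = 1;                    % assumed measurement noise variance
Nk = 50;                   % number of time steps
x    = zeros(2,1);         % true state
xhat = zeros(2,1);         % state estimate
P    = eye(2);             % estimation error covariance
u    = ones(1,Nk);         % known input sequence
for k = 1:Nk
    w = sqrt(q)*randn;                           % process noise on the input, as in (18)
    x = A*x + B*(u(k) + w);                      % true state propagation
    y = C*x + sqrt(Sz)*randn;                    % noisy measurement, as in (19)
    K    = A*P*C' / (C*P*C' + Sz);               % Kalman gain, eq. (22)
    xhat = A*xhat + B*u(k) + K*(y - C*xhat);     % state estimate update, eq. (23)
    P    = A*P*A' + Sw - K*(C*P*A');             % error covariance update, eq. (24)
end
fprintf('Final estimation error norm: %g\n', norm(x - xhat));
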
The equations of the steady-state Kalman filter for this problem are given as follows.

Measurement update:
\hat{x}(k \mid k) = \hat{x}(k \mid k-1) + K ( y_v(k) - C \hat{x}(k \mid k-1) )

Time update:
\hat{x}(k+1 \mid k) = A \hat{x}(k \mid k) + B u(k)

In these equations:
• \hat{x}(k \mid k-1) is the estimate of x(k) given past measurements up to y_v(k-1)
• \hat{x}(k \mid k) is the updated estimate based on the last measurement y_v(k)

Given the current estimate \hat{x}(k \mid k), the time update predicts the state value at the next sample k+1 (one-step-ahead predictor). The measurement update then adjusts this prediction based on the new measurement y_v(k+1). The correction term is a function of the innovation, that is, the discrepancy between the measured and predicted values of y(k+1). The innovation gain K (Kalman gain) is chosen to minimize the steady-state covariance of the estimation error given the noise covariances. This filter generates an optimal estimate y_e of the output. Note that the filter state is \hat{x}(k \mid k-1).


Example: Given that

A = [ 3.1270  -0.9430   339
      1.0000   0         0
      0        1.0000    0 ]

B = [ -0.7660
       0.8190
       0.7330 ]

C = [ 1  0  0 ]

D = 0

Now we can construct a state-space model of the block diagram with the functions parallel and feedback. We build a complete mathematical model with u, w and v as inputs and y and yv (the measurements) as outputs.
v
(measurements) as outputs.
a = A;
b = [B B 0*B];
c = [C;C];
d = [0 0 0;0 0 1];
P = ss(a,b,c,d,-1,'inputname',{'u' 'w' 'v'},...
'outputname',{'y' 'yv'});
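
The parallel and feedback connections that produce the simulation model SimModel used in the lsim call below are not shown in the text; the sketch that follows is one way to assemble it, following MathWorks' standard Kalman filtering example, and is an assumption rather than the authors' exact code (the covariance output of kalman is renamed Pcov so that it does not overwrite the plant model P defined above).

% One possible construction of SimModel (assumed, after the standard MATLAB example)
Sw = 3.2;  Sz = 1;                           % noise covariances, as used below
Plant = ss(A, [B B], C, 0, -1, 'inputname', {'u' 'w'}, 'outputname', 'y');
[kalmf, L, Pcov] = kalman(Plant, Sw, Sz);    % discrete Kalman estimator for (A,B,C)
kalmf = kalmf(1,:);                          % keep only the output estimate ye
sys      = parallel(P, kalmf, 1, 1, [], []); % share the known input u
SimModel = feedback(sys, 1, 4, 2, 1);        % feed the noisy output yv into the filter
SimModel = SimModel([1 3], [1 2 3]);         % outputs y and ye; inputs w, v and u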

To simulate the filter behavior we generate a sinusoidal input u and process and measurement noise vectors w and v, then run the simulation:

t  = (0:100)';                 % time runs from 0 to 100 s
u  = sin(t/15);                % sinusoidal input
Sw = 3.2;  Sz = 1;             % process and measurement noise covariances
n  = length(t);                % n = 101
randn('seed',0);
w  = sqrt(Sw)*randn(n,1);      % process noise
v  = sqrt(Sz)*randn(n,1);      % measurement noise

[out,x] = lsim(SimModel,[w,v,u]);
y  = out(:,1);                 % true response
ye = out(:,2);                 % filtered response
yv = y + v;                    % measured response



The Kalman gain is

K = 1.0e+005 *
    4.5195   0.0000   0.0065
    0.0000   0.0000   0.0000
    0.0065   0.0000   0.0000

Figure 2: Kalman Filter Response (output and error)

The measured error and its covariance are
MeasErr = y - yv;
MeasErrCov = sum(MeasErr.*MeasErr)/length(MeasErr);
The estimated error and its covariance are
EstErr = y - ye;
EstErrCov = sum(EstErr.*EstErr)/length(EstErr);
Measured error covariance = 0.1657
Estimated error covariance = 0.1657
Note: since the measured error covariance and the estimated error covariance have the same value, we are sure of determining an unbiased estimate of the position of the communication satellite.

Simulation of the Position of the Communication Satellite
Initial Velocity Mismatch
The Kalman Filter block works best when it has an accurate estimate of the satellite position and velocity, but, given time, it can compensate for a bad initial estimate.

To see this, the initial conditions for the estimated state in the Kalman Filter block were set as follows: an initial satellite E-W position estimate, with an initial velocity of 400 in the Y direction, over a time t = 0 to 100 seconds at intervals of 10 s. The result is shown in Figure 3 below.














Figure 3: Initial satellite E-W position estimate, with initial velocity of 400 in the Y direction

RESULTS AND ANALYSIS
Estimation of the position and velocity is performed by the ‘Kalman Filter' subsystem. This subsystem samples
the noisy measurements, converts them to rectangular coordinates, and sends them as input to the Signal
Processing block set, Figure 1b
The Kalman Filter block produces two outputs in this application.
• An estimate of the actual position. This output is converted back to polar coordinates so that it can be compared with the measurement to produce a residual, or error (the difference between the estimate and the measurement). The Kalman Filter block smooths the measured position data to produce its estimate of the actual position.
• The second output from the Kalman Filter block is the estimate of the state of the satellite. In this case, the state comprises four numbers that represent position and velocity in the X and Y coordinates (a constant-velocity form of this state model is sketched below).
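The four-element state mentioned above is typically arranged as [x position; x velocity; y position; y velocity] and propagated with a constant-velocity model. The matrices below are only a minimal sketch of such a model, with an assumed sample period T; they are not taken from the Simulink blocks used in this work.

% Constant-velocity tracking model for the state [x; vx; y; vy] (sketch).
T = 1;                     % sample period in seconds (assumed)
A_cv = [1 T 0 0;           % x  <- x + T*vx
        0 1 0 0;           % vx <- vx
        0 0 1 T;           % y  <- y + T*vy
        0 0 0 1];          % vy <- vy
H_cv = [1 0 0 0;           % the converted rectangular measurements observe
        0 0 1 0];          %   only the two position components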

Changing the initial velocity estimate from 400 to 100 and running the model again, we obtain:


Figure 4: Initial velocity estimate in the Y direction = 100

It is observed that the range residual is much greater and the 'E-W Position' estimate is inaccurate at first.
Gradually, the residual becomes smaller and the position becomes more accurate as more measurements are
gathered.

Simulation by Increasing the Measurement Noise Estimate
In the present model, the noise added to the range estimate is rather small compared to the ultimate range: the maximum magnitude of the noise is 300 ft, compared to a maximum range of 40,000 ft. We increase the magnitude of the range noise to a larger value, for example five times this amount (1,500 ft), by changing the first component of the Gain parameter in the 'Meas. Noise Intensity' Gain block.






Figure 5: Poor Prediction

We observe that the blue lines representing the estimated positions have moved farther from the red lines representing the actual positions, and the curves have become much more 'bumpy' and 'jagged'. We can partially compensate for this inaccuracy by giving the Kalman Filter block a better estimate of the measurement noise: we set the Measurement noise covariance parameter of the Kalman Filter block to 1500 and run the model again.

Figure 6: Unbiased estimate

ANALYSIS AND DISCUSSION
• From Figure 2, one can see that the output covariance did indeed reach a steady state in about five samples. The measured and estimated error covariances arrived at the same value; since they are equal, the estimated output of our Kalman filter is unbiased, thereby achieving the aim of the study.
• From Figure 5, it is noted that when the measurement noise estimate is better, the E-W and N-S position estimate curves become smoother. This is the expected behavior.

CONCLUSION AND RECOMMENDATION
Conclusion: For communication satellites launched into orbit, predicting the position of the spacecraft in orbit is intricate owing to the effects of gravity waves, drag resistance, solar arrays, etc., which give rise to erroneous range detection of the spacecraft. The Kalman filter proffers a solution to these problems. It is important to state here that it not only works well in practice, but it is also theoretically attractive, because it can be shown that, of all possible filters, it is the one that minimizes the variance of the estimation error.

RECOMMENDATION
It is recommended that the Kalman filter be used to filter noise in order to track or predict linear systems. Many physical processes can be approximated as linear systems and can therefore be predicted, e.g. a vehicle driving along a road, a motor shaft driven by winding currents, or a sinusoidal radio-frequency carrier signal.









Received for Publication: 12/10/2010
Accepted for Publication: 02/11/2010

Corresponding Author
E. O. Oyediran,
Department of Electrical/Electronic Engineering, University of Port Harcourt, Nigeria.
Email: [email protected]







