(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8 , No. 4 , 2010
Load Balancing in Distributed Computer Systems Ali M. Alakeel College of Computing and Information Technology University of Tabuk Tabuk, Saudi Arabia Email:
[email protected]
Abstract—Load balancing in distributed computer systems is the process of redistributing the work load among processors in the system to improve system performance. Accomplishing this, however, is not an easy task. In recent research and literature, various approaches have been proposed to achieve this goal. Rather than promoting a specific load balancing policy, this paper presents and discusses different orientations toward solving the problem.

Keywords-distributed systems; load balancing; algorithms; performance evaluation

I. INTRODUCTION

Since the introduction of parallel computers, the main objective has been to allow more than one computer to cooperate in solving the same problem. Obviously, distributing a work load equally between equally capable processors should give the best results. Theoretically speaking, N computers should spend 1/Nth of the time a single computer spends in solving the same problem [1].

Unfortunately, cooperation by itself is not enough and could be potentially disastrous due to such factors as the communication overhead between processors and imbalanced distribution of work among processors. Having some processors doing more or less work than others will degrade the overall performance of the system and, in the worst case, will cause some of the processors to contribute nothing at all to reducing the time to solve the problem. Since all processors have to communicate with each other at some point during their computation, a lightly loaded processor will spend most of its time waiting for a subsequent result from an overloaded processor. Consequently, we find that most of the processors' time is spent waiting rather than doing useful computation as intended.

Load balancing, the process of distributing the work fairly among participating processors, is a sub-problem of a bigger dilemma: distributed scheduling. Distributed scheduling is composed of two parts: local scheduling, which takes care of assigning processing resources to jobs within one node, and global scheduling, which determines which jobs are processed by which processor. Load balancing is a vital ingredient in any acceptable global scheduling policy. The aim of load balancing is to improve system performance by preventing some processors from being overwhelmed with work while others find no work to do.

Achieving the best load balance in a distributed system is not an easy task. A good load balancing policy should consider the following goals: (1) optimal overall system performance, i.e., maximum total processing capacity at acceptable delays; (2) fairness of service, i.e., jobs should be serviced equally regardless of their origin; and (3) failure tolerance, i.e., keeping system performance at an acceptable level in the presence of partial failures in the system [4].

For the purpose of this presentation, we assume the configuration shown in Figure 1. This configuration is one possible arrangement of a distributed homogeneous computer system with N processors connected through a computer network. In this configuration, processors are assumed to be of the same type and to have the same power. Tasks (jobs) are individually independent of each other and can be processed by any processor. Moreover, tasks arrive at each processor Pj with an arrival rate λj, and a First Come-First-Serve (FCFS) queue discipline is assumed for all queues in the system under consideration.

[Figure 1 depicts processors P1, P2, ..., Pk, Pk+1, Pk+2, ..., PN, each fed by its own arrival stream λ1, λ2, ..., λN and connected through a computer network.]

Figure 1. An Example Distributed Computer System Configuration
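To make this model concrete, the sketch below simulates the Figure 1 setup: N identical processors, each with its own FCFS queue and an independent arrival stream (approximated here by a Bernoulli trial per time slot). All names, rates, and the slotted-time simplification are ours, not the paper's.

```python
import random
from collections import deque

class Processor:
    """One node of the homogeneous system: a FCFS queue fed by its own arrival stream."""
    def __init__(self, arrival_rate, service_rate):
        self.arrival_rate = arrival_rate    # lambda_j in Figure 1
        self.service_rate = service_rate    # identical for all processors (homogeneous system)
        self.queue = deque()                # FCFS: append at the right, serve from the left

    def step(self, rng):
        """One time slot: possibly one arrival, then possibly one service completion."""
        if rng.random() < self.arrival_rate:     # Bernoulli approximation of a Poisson stream
            self.queue.append("task")
        if self.queue and rng.random() < self.service_rate:
            self.queue.popleft()                 # FCFS service discipline

def simulate(n_processors=4, arrival_rate=0.3, service_rate=0.5, steps=10_000, seed=1):
    """Run the system with no load balancing and return the final queue lengths."""
    rng = random.Random(seed)
    procs = [Processor(arrival_rate, service_rate) for _ in range(n_processors)]
    for _ in range(steps):
        for p in procs:
            p.step(rng)
    return [len(p.queue) for p in procs]

print(simulate())
```

With no task transfers, each queue evolves independently; the load balancing strategies discussed below all amount to different rules for moving tasks between these queues.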
8
http://sites.google.com/site/ijcsis/ ISSN 1947-5500
Several attempts have been made in the load balancing field, yet with different approaches and orientations. Two different approaches have been taken by researchers in their attempts to achieve load balance in distributed systems: static and dynamic. In the static approach, enough information about the status of all processors in the system is assumed before distributing the work load among them. Once tasks have been assigned to run at specific processors, however, this assignment is final and cannot be changed regardless of any changes occurring later in the system [4], [7], [10], [11], [18]. Static strategies may be either deterministic or probabilistic. A deterministic strategy assigns tasks to processors based on a fixed criterion, while a probabilistic strategy uses probability values when the assignment is made. For example, a load balancing policy that always transfers extra tasks from processor A to processor B is deterministic, while a policy that transfers extra tasks from processor A to processor B with probability 0.7 and to processor C with probability 0.3 is probabilistic [5]. A static solution's ignorance of system workload fluctuations in decision making is a major disadvantage. On the other hand, static algorithms are easier to work with and analyze [4], [5], [18]. Several static algorithms have been developed and implemented, some of which can be found in [12]-[14] and [19]-[22].

This paper concentrates almost completely on the dynamic approach, due to its more realistic approach to load balancing. The goal of this paper is not to advance a specific dynamic load balancing policy, but rather to address the problem and present different approaches that have been used to develop a solution for it. Section II presents dynamic load balancing strategies, and Section III identifies different methods of assigning the responsibility for conducting load balance. Section IV presents the different solutions a load balance strategy could yield, Section V identifies some techniques used to model and analyze a load balance strategy, and Section VI concludes the paper.

II. DYNAMIC LOAD BALANCING

The dynamic approach looks at the load balancing problem more realistically by assuming that little information is available before any assignment is made. It does not presume any knowledge of where a certain task will finally execute or in what environment. Dynamic load balancing algorithms monitor changes in the system work load and redistribute the work load accordingly [4], [7], [10], [11]. Although the dynamic load balancing approach alleviates the drawbacks of the static approach, it is harder to work with and analyze [4]. Several dynamic algorithms, e.g., [2], [3], [15]-[17], [23]-[26], have been developed and implemented.

Research has shown that the dynamic approach outperforms the static approach and yields better system performance [5]. This section identifies some, though not necessarily all or the best, dynamic load balancing strategies that have been reported in the literature. It must be emphasized that these strategies were selected for presentation purposes only and are not to be interpreted as an exhaustive selection or classification. Four strategies will be presented: Bidding, Drafting, Threshold, and Greedy. All presented strategies try to minimize the response time of each process. A comparison of the strategies will be used to demonstrate their relative uses and effectiveness. The bidding strategy is compared with the drafting strategy based on their similarities more than their differences. The way each of them approaches the problem is presented, in addition to identifying the major parameters and properties of each. Communication overhead is used to discriminate between the performances of the two strategies.

Similarly, the threshold strategy is compared with the greedy strategy based on their similarities. Since both strategies assume the same amount of information to reach the same objective, their performance will be compared in addition to presenting the features of each strategy.

A. Bidding Strategy

The main concept of this approach is bids. An overloaded processor looking for help in executing some of its tasks requests other processors to submit their bids. Bid information includes the current work load status of each processor. After receiving all bids from participating processors, the original processor selects to whom it will send some of its tasks for execution. A major drawback of this strategy is the possibility that one processor will become overloaded as a result of winning many bids. To overcome this problem, some variations of this strategy allow the bidder processor to accept or reject the tasks sent by the original processor. This is done by allowing it to send a message to the original processor informing it of whether the work has been accepted or rejected. Since a processor's load could change while these messages take place, the final selection might not turn out to be as good as it seemed at an earlier time, or vice versa [8]. Different algorithms have been proposed in the literature to determine who gets to initiate the bid, bid information, bid selection, bid participation, and bid evaluation [4], [7]-[11].

The performance of this strategy depends on the amount of information exchanged, the selection of bids, and communication overhead [4]. More information exchange enhances the performance and provides a stronger basis for selection, but also requires extra communication overhead [4], [6], [8]. Examples of this strategy can be found in [27]-[30].

B. Drafting Strategy

The drafting strategy differs from the bidding strategy in the way it allows process migration and in the manner it attempts to achieve load balance. The drafting policy tries to alleviate some of the communication overhead introduced by the bidding strategy. The drafting policy achieves load balance by keeping all processors busy rather than evenly distributing the work load among participating processors (which is one of the objectives of the bidding strategy). In the bidding strategy, to keep all processors evenly loaded, groups of processes will be required to migrate from a heavily loaded processor to a lightly loaded processor. Consequently, it is possible to find some of these processes migrating back as a result of the unpredictable change of the processors' work loads. To avoid this problem of the bidding approach, the drafting strategy allows only one process to migrate at a time rather than group migration [8].
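Before turning to the drafting mechanism in detail, the bidding exchange of Section A can be sketched as follows. This is a deliberate simplification of ours, not the algorithms of [27]-[30]: we assume the bid is simply the reported queue length, and the winner "rejects" further work once it reaches the same load limit as the sender.

```python
# Hypothetical bidding sketch: names, the load limit, and the
# acceptance rule are our assumptions for illustration only.

def request_bids(peers):
    """Each peer 'bids' by reporting its current queue length."""
    return {pid: len(queue) for pid, queue in peers.items()}

def offload(overloaded, peers, load_limit=3):
    """Migrate tasks from an overloaded queue to successive bid winners."""
    moved = 0
    while len(overloaded) > load_limit:
        bids = request_bids(peers)
        winner = min(bids, key=bids.get)        # lowest reported load wins the bid
        if bids[winner] >= load_limit:          # winner may reject: it is now loaded too
            break
        peers[winner].append(overloaded.pop())  # migrate one task to the winner
        moved += 1
    return moved

peers = {"P2": [], "P3": ["t"] * 2}
overloaded = ["t"] * 6
moved = offload(overloaded, peers)
print(moved, len(overloaded))   # 3 tasks migrate; 3 remain locally
```

Note how the winner's own load rises with each win; without the rejection check, a single low-load bidder could itself become overloaded, which is exactly the drawback discussed above.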
The drafting strategy adopts a process migration policy based on giving control to the lightly loaded processors. Lightly loaded processors initiate process migration, instead of migration being triggered by an overloaded processor as in the bidding strategy [8]. In drafting, the number of processes currently at a processor is used for work load evaluation. Each processor maintains its work load and identifies itself as being in one of the following states: H-Load, N-Load, or L-Load. An H-Load (heavy load) indicates that some of this processor's processes can migrate to other processors. An N-Load (normal load) indicates that there is no intention for process migration. An L-Load (light load) indicates that this processor is willing to accept some migrant processes. A load table is used at each processor to hold this information about the other processors and acts as a billboard from which the global information of the system is obtained. When a load change occurs in a processor, it broadcasts its load information to the other nodes so that they can update their load tables.

When a processor becomes lightly loaded, i.e., L-Load, it identifies the processors having the status H-Load from its load table and sends them a draft-request message. This message indicates that the drafting processor is willing to accept more work. If by the time it receives this message it is still in H-Load, each remote (drafted) processor responds by sending a draft-respond message which contains draft-age information; otherwise, the current load status is returned to the drafting processor. Adopting the concept that a process is allowed to migrate only if it is expecting a better response time, an age is associated with each draftable process. Some of the parameters that may be used for age determination are process waiting time, process priority, or process arrival time [8].

The draft-age is determined by the ages of those processes nominated to be drafted. Various alternatives for the draft-age calculation are possible: it may be taken as the maximum age of all draftable processes, the average age of the draftable processes, or simply the number of draftable processes [8]. When all draft-respond messages are received, the drafting processor calculates a draft-standard. The draft-standard is calculated from the draft-ages received and is used to ensure fairness of selection among drafted processes. The choice of draft-standard is crucial to the performance of this strategy and is determined at the system design stage. After calculating the draft-standard, a draft-select message is sent to the drafted processor that has the highest draft-age. The drafting processor will send the draft-select message only if it is still in the L-Load state; otherwise, it will not accept any migrating processes.

Research has shown in [8] that the drafting strategy alleviates many drawbacks encountered in the bidding algorithm, such as unnecessary communication messages and the possibility of having a bid-winner processor become overloaded. Simulation results and detailed comparisons are reported in [8].

C. Threshold Strategy

In this type of load balancing algorithm, a threshold value T is used to decide whether a task is executed locally or remotely. The threshold is defined on the processor's queue length, where the queue length is the number of processes in service plus the number of processes waiting in the queue. The threshold value is assumed to be static in an implementation of the algorithm [4], [5].

In this strategy, a processor tries to execute a newly arriving task locally unless the threshold has been reached. In that case, the processor selects another processor at random and probes it to determine whether transferring the task would place the probed processor above the threshold. If it would not, the task is transferred to the probed processor, which must execute the task without any attempt to transfer it to a third processor. If it would, another processor is selected in the same manner. This operation continues up to a certain limit called the probing limit. After that, if no destination processor has been found, the task is executed locally [5]. It should be noted that the threshold strategy requires no exchange of information among processors in order to decide whether to transfer a task [5]. This is an advantage because it minimizes communication overhead. Another advantage is that the threshold policy avoids extra transfers of tasks to processors that are above the threshold. It has been shown in [5] that the threshold algorithm with T=1 yields the best performance for low to moderate system load, and the threshold algorithm with T=2 gives the best performance for high system load.

D. Greedy Strategy

In this strategy, the current state of each processor P is represented by a function f(n), where n is the number of tasks currently at the processor. If a task arrives at P and the number of tasks n is greater than zero, then this processor looks for a remote processor whose state is less than or equal to f(n). If a remote processor is found with this property, the task is transferred there. The performance of this strategy depends on the selection of the function f(n). It has been shown in [6] that f(n) < n must hold in order to achieve good performance, and that n-1, n div 2, n div 3, n div 4, etc., are possible choices for f(n). Furthermore, it has been shown in [6] that f(n) = n div 3 yields the best results and that the greedy strategy outperforms the threshold strategy with T=1 in all experiments.

The greedy strategy adopts a cyclic probing mechanism instead of the random selection used in the threshold strategy. In this cyclic probing mechanism, processor i probes processor (i+j) mod N, where N is the number of processors in the system, in the j-th probe to locate a suitable destination processor. For example, in a system with 5 processors numbered 0 through 4, processor 1 will first probe processor 2; if this attempt is not successful, it will probe processor 3, and so on. As in the threshold strategy, once a task is transferred to a remote processor it must be executed there [6]. Despite the similarities between the two strategies, it has been demonstrated using simulation results in [6] that the greedy strategy outperforms the threshold strategy. This improvement is attributed to the fact that the greedy strategy attempts to transfer every task that arrives at a busy processor, whereas the threshold strategy attempts a transfer only when a task arrives at a processor that has reached the threshold T or higher.
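The two placement rules can be sketched side by side. The probe orders, threshold T, and f(n) = n div 3 follow the descriptions above, but the code is a simplification of ours, not the actual algorithms of [5] and [6].

```python
# Illustrative sketch of threshold vs. greedy task placement.
# `loads` holds each processor's queue length; indices are processor ids.

def threshold_place(home, loads, T=1, probe_limit=3, probe_order=None):
    """Run locally unless the home queue has reached T; otherwise probe up to
    probe_limit randomly chosen processors (passed in as probe_order here,
    so the example is deterministic) and transfer to the first below T."""
    if loads[home] < T:
        return home
    for target in (probe_order or [])[:probe_limit]:
        if loads[target] < T:        # transfer will not push the target past T
            return target
    return home                      # probing limit exhausted: execute locally

def greedy_place(home, loads, f=lambda n: n // 3):
    """Probe cyclically from home+1: in the j-th probe, processor i probes
    (i + j) mod N; transfer to the first processor with load <= f(n)."""
    n = loads[home]
    if n == 0:
        return home                  # idle processor keeps its own task
    N = len(loads)
    for j in range(1, N):
        target = (home + j) % N      # cyclic probing order
        if loads[target] <= f(n):
            return target
    return home

loads = [3, 0, 2, 1]
print(threshold_place(0, loads, T=2, probe_order=[2, 3, 1]))  # probes 2 (full), then 3
print(greedy_place(0, loads))                                  # f(3)=1, processor 1 qualifies
```

The contrast discussed above is visible in the guards: `greedy_place` tries to transfer whenever n > 0, while `threshold_place` only acts once the home queue has reached T.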
III. RESPONSIBILITY OF LOAD BALANCING

Along with the various load balancing strategies, which may be applied independently or tailored to enhance the performance of an algorithm for solving a certain problem, different policies for where to place the control of the load balancing algorithm have been proposed in the literature: centralized, distributed, or semi-distributed.

A centralized load balancing strategy assigns a single processor the responsibility of initiating and monitoring the load balance operation. In this strategy, a dedicated processor gathers the global information about the state of the system and assigns tasks to individual processors. Despite their high potential for achieving optimal performance, centralized strategies have some disadvantages: high vulnerability to failures, storage requirements for maintaining the state information (especially for large systems), and the dependence of system performance on the central processor, which could become a bottleneck [9].

In a distributed load balancing strategy, each processor executes the same algorithm and exchanges information with other processors about the state of the system. Each processor may send or receive work on the basis of a sender-initiated or a receiver-initiated policy. In a sender-initiated policy, the sender decides which job gets sent to which receiver; in a receiver-initiated policy, the receiver searches for more work to do. Intuitively, queues form at senders if a receiver-initiated policy is used, while they form at receivers if a sender-initiated policy is used. Additionally, scheduling decisions are made when a new job arrives at the sender in a sender-initiated policy, while they are made at the departure of a job in a receiver-initiated policy. Which policy is adopted depends upon whether the load transfer request is initiated by an over-loaded or an under-loaded processor. Many distributed strategies belong to one of the two policies. For instance, of the strategies discussed in Section II, the bidding strategy belongs to the sender-initiated policy, whereas the drafting strategy belongs to the receiver-initiated policy [4], [5], [8], [9].

It has been demonstrated in [4], [5], [18], using analytical models and simulations, that sender-initiated strategies generally perform better at lower system loads while receiver-initiated strategies perform better at higher system loads, assuming that process migration cost under the two strategies is comparable. Some of the advantages offered by the distributed policy are fault tolerance, minimal storage requirements for keeping status information, and the availability of system state information at all nodes. The distributed policy still has some disadvantages, one of which is that optimal scheduling decisions are difficult to make because of the rapidly changing environment introduced by the arrivals at and departures from individual processors. Another disadvantage is the extra communication overhead introduced by all processors trying to gather information about each other. To mitigate this overhead, some distributed strategies minimize the amount of information exchanged, which reflects negatively on the performance of the algorithm.

The semi-distributed policy sits between the centralized and distributed policies. It was introduced to take the best of each and to avoid the major drawbacks of the two policies. The semi-distributed strategy is based on partitioning the processors into equal-sized sets. Each set adopts a centralized policy, in which a central processor takes charge of load balancing within its set. The sets together adopt a distributed policy, in which the central processor of each set exchanges information with the central processors of the other sets to achieve a global load balance.

It has been shown in [9] that the semi-distributed policy produces better performance than the centralized and distributed policies. Research demonstrates that each central processor yields optimal load balance locally within its set. Moreover, this policy does not incur high communication overhead while gathering system state information. Although this policy is a mediator between the centralized and the distributed ones, it fits large distributed systems better than small systems.

IV. DIFFERENT OBJECTIVES OF LOAD BALANCING STRATEGIES

Different load balancing strategies have different objectives and yield various solutions. Some solutions are optimal, others suboptimal. This section highlights the features of each kind of solution and its relationship with a load balancing policy.

Optimal solutions can be obtained only if complete information regarding the state of the system, as well as the resource needs of a process, is known. An optimal load balance strategy makes optimal assignments based on some criterion function. Examples of optimizing measures are minimizing process completion time and maximizing system throughput [7]. Static load balancing strategies have a higher potential for yielding optimal solutions than dynamic ones. In some situations, however, producing an optimal solution is computationally infeasible; in this case, suboptimal solutions may be targeted. A suboptimal solution may be either approximate or heuristic [7].

An approximate solution uses the same algorithm that would produce an optimal solution, but instead of searching the entire solution space, it limits itself to producing a "good" solution in less time rather than a perfect one. Finding a good approximate solution depends on the availability of a function to evaluate a solution, the time required to evaluate a solution, the ability to judge the value of an optimal solution according to some measurement criteria, and the availability of a mechanism for intelligently reducing the solution space [7].

Heuristic load balancing strategies use static algorithms which make the most realistic assumptions regarding the information available about process and system load. Heuristic solutions try to identify parameters that affect system performance indirectly and monitor them. For instance, clustering groups of processes that communicate heavily onto one processor would enhance system performance. Although this act directly decreases the overhead of passing information between processes, it cannot be directly related in a quantitative way to system performance as seen by the user [7].
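As a toy illustration of this clustering heuristic, communicating processes can be merged into shared placement groups with a simple union-find pass over a communication graph. The traffic cutoff, data, and structure are invented for illustration; they are not taken from [7].

```python
# Hypothetical clustering sketch: co-locate pairs of processes whose
# pairwise traffic meets a cutoff, so heavy communication stays local.

def cluster(comm, heavy=10):
    """comm maps (a, b) process pairs to message counts; processes whose
    traffic reaches `heavy` are merged into one placement group."""
    group = {}                          # process -> parent in a union-find forest
    def find(p):
        while group.get(p, p) != p:     # follow parents up to the representative
            p = group[p]
        return p
    for (a, b), traffic in comm.items():
        group.setdefault(a, a)
        group.setdefault(b, b)
        if traffic >= heavy:
            group[find(b)] = find(a)    # union: a and b share a processor
    clusters = {}
    for p in group:
        clusters.setdefault(find(p), set()).add(p)
    return sorted(sorted(c) for c in clusters.values())

comm = {("A", "B"): 25, ("B", "C"): 3, ("C", "D"): 40}
print(cluster(comm))   # A-B and C-D communicate heavily, so each pair is grouped
```

Each resulting group would then be assigned to a single processor; the heuristic lowers message-passing overhead, though, as noted above, the gain is hard to relate quantitatively to user-visible performance.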
Four techniques of task allocation are usually used by static load balancing strategies, whether producing an optimal or an approximate solution: solution-space enumeration and search, graph theoretic, mathematical programming, or queuing theoretic [7].

V. LOAD BALANCING ALGORITHM MODELING AND ANALYSIS

Before an algorithm is implemented, it is usually analyzed to anticipate its effectiveness were it to be implemented. For load balancing algorithms, analytical modeling and simulation have been the dominant techniques in the literature; they have been used extensively to demonstrate and compare various strategies. Queuing theory in particular has been used in both analytical and simulation modeling. Analytical modeling may target system performance in the steady state or in the non-steady state. Steady-state analysis is based on birth-and-death Markovian processes, while non-steady-state analysis is concerned with how the system performs in the presence of partial failure, i.e., system fault tolerance is analyzed. Simulation modeling may be either discrete-event or continuous. In the case of discrete-event simulation, simulation languages such as SLAM, SIMAN, GPSS, or SIMSCRIPT, or high-level programming languages such as C or MATLAB, are used to model the system. In the case of continuous modeling, differential and integral equation techniques are used. Steady-state analysis and discrete-event simulation have been used heavily in the literature to analyze and model load balancing algorithms.

VI. CONCLUSION

This paper has attempted to present the most recent ideas and achievements realized in load balancing in distributed systems. The intention has been to provide a suitable understanding of the problem and the different approaches that researchers have employed to solve it. Specific load balancing strategies were presented to give an idea of where the research in this field is headed, rather than to elect them over others.

REFERENCES
[1] H. S. Stone, High-Performance Computer Architecture, 2nd ed. Reading, MA: Addison-Wesley, 1990.
[2] S. Dhakal, M. M. Hayat, J. E. Pezoa, C. Yang, and D. Bader, "Dynamic Load Balancing in Distributed Systems in the Presence of Delays: A Regeneration-Theory Approach," IEEE Transactions on Parallel and Distributed Systems, vol. 18, no. 4, April 2007.
[3] L. M. Campos and I. Scherson, "Rate of Change Load Balancing in Distributed and Parallel Systems," Parallel Computing, vol. 26, no. 9, pp. 1213-1230, July 2000.
[4] Y. Wang and R. Morris, "Load Sharing in Distributed Systems," IEEE Trans. Comput., vol. C-34, no. 3, pp. 204-217, March 1985.
[5] D. L. Eager, E. D. Lazowska, and J. Zahorjan, "Adaptive Load Sharing in Homogeneous Distributed Systems," IEEE Trans. Software Eng., vol. SE-12, no. 5, pp. 662-675, May 1986.
[6] S. Chowdhury, "The Greedy Load Sharing Algorithm," J. Parallel and Distributed Comput., vol. 9, pp. 93-99, May 1990.
[7] T. L. Casavant and J. G. Kuhl, "A Taxonomy of Scheduling in General-Purpose Distributed Computing Systems," IEEE Trans. Software Eng., vol. 14, no. 2, pp. 141-154, February 1988.
[8] L. M. Ni, C. Xu, and T. B. Gendreau, "A Distributed Drafting Algorithm for Load Balancing," IEEE Trans. Software Eng., vol. SE-11, no. 10, pp. 1153-1161, October 1985.
[9] I. Ahmad and A. Ghafoor, "Semi-Distributed Load Balancing for Massively Parallel Multicomputers," IEEE Trans. Software Eng., vol. 17, no. 10, pp. 987-1004, October 1991.
[10] K. Ramamritham, J. A. Stankovic, and W. Zhao, "Distributed Scheduling of Tasks with Deadlines and Resource Requirements," IEEE Trans. Comput., vol. 38, no. 8, pp. 1110-1123, August 1989.
[11] J. A. Stankovic, K. Ramamritham, and S. Cheng, "Evaluation of a Flexible Task Scheduling Algorithm for Distributed Hard Real-Time Systems," IEEE Trans. Comput., vol. C-34, no. 12, pp. 1130-1143, December 1985.
[12] A. N. Tantawi and D. Towsley, "Optimal Static Load Balancing in Distributed Computer Systems," J. ACM, vol. 32, no. 2, pp. 445-465, April 1985.
[13] S. H. Bokhari, "Dual Processor Scheduling with Dynamic Reassignment," IEEE Trans. Software Eng., vol. SE-5, no. 4, pp. 341-349, July 1979.
[14] C. Kim and H. Kameda, "An Algorithm for Optimal Static Load Balancing in Distributed Computer Systems," IEEE Trans. Comput., vol. 41, no. 3, pp. 381-384, March 1992.
[15] S. Penmatsa and A. T. Chronopoulos, "Dynamic Multi-User Load Balancing in Distributed Systems," in Proc. 2007 IEEE International Parallel and Distributed Processing Symposium, Long Beach, CA, USA, pp. 1-10, March 2007.
[16] A. Karimi, F. Zarafshan, A. B. Jantan, A. R. Ramli, and M. I. Saripan, "A New Fuzzy Approach for Dynamic Load Balancing Algorithm," International Journal of Computer Science and Information Security, vol. 6, no. 1, pp. 001-005, October 2009.
[17] C. C. Hui and S. T. Chanson, "Improved Strategies for Dynamic Load Balancing," IEEE Concurrency, vol. 7, no. 3, pp. 58-67, July-Sept. 1999.
[18] D. L. Eager, E. D. Lazowska, and J. Zahorjan, "A Comparison of Receiver-Initiated and Sender-Initiated Adaptive Load Sharing," Performance Evaluation, vol. 6, pp. 53-68, March 1986.
[19] J. A. Bannister and K. S. Trivedi, "Task Allocation in Fault-Tolerant Distributed Systems," Acta Inform., vol. 20, pp. 261-281, 1983.
[20] F. Berman and L. Snyder, "On Mapping Parallel Algorithms into Parallel Architectures," in Proc. 1984 Int. Conf. Parallel Processing, pp. 307-309, August 1984.
[21] X. Tang and S. T. Chanson, "Optimizing Static Job Scheduling in a Network of Heterogeneous Computers," in Proc. Int. Conf. on Parallel Processing, pp. 373-382, August 2000.
[22] K. Efe, "Heuristic Models of Task Assignment Scheduling in Distributed Systems," Computer, vol. 15, no. 6, pp. 50-56, June 1982.
[23] G. R. Andrews, D. P. Dobkin, and P. J. Downey, "Distributed Allocation with Pools of Servers," in Proc. ACM SIGACT-SIGOPS Symp. Principles of Distributed Computing, pp. 73-83, August 1982.
[24] R. M. Bryant and R. A. Finkel, "A Stable Distributed Scheduling Algorithm," in Proc. 2nd Int. Conf. Distributed Computing Systems, pp. 314-323, April 1981.
[25] T. L. Casavant and J. G. Kuhl, "Design of a Loosely-Coupled Distributed Multiprocessing Network," in Proc. 1984 Int. Conf. Parallel Processing, pp. 42-45, August 1984.
[26] L. M. Ni and K. Abani, "Nonpreemptive Load Balancing in a Class of Local Area Networks," in Proc. Computer Networking Symp., pp. 113-118, December 1981.
[27] J. A. Stankovic and I. S. Sidhu, "An Adaptive Bidding Algorithm for Processes, Clusters and Distributed Groups," in Proc. 4th Int. Conf. Distributed Computing Systems, pp. 49-59, 1984.
[28] D. Grosu and A. T. Chronopoulos, "Noncooperative Load Balancing in Distributed Systems," Journal of Parallel and Distributed Computing, vol. 65, no. 9, pp. 1022-1034, Sept. 2005.
[29] Z. Zeng and B. Veeravalli, "Rate-based and Queue-based Dynamic Load Balancing Algorithms in Distributed Systems," in Proc. 10th Int. Conf. on Parallel and Distributed Systems, pp. 349-356, July 2004.
[30] Z. Khan, R. Singh, J. Alam, and R. Kumar, "Performance Analysis of Dynamic Load Balancing Techniques for Parallel and Distributed Systems," International Journal of Computer and Network Security, vol. 2, no. 2, February 2010.

Ali M. Alakeel (also known as Ali M. Al-Yami) obtained his Ph.D. degree in computer science from Illinois Institute of Technology, Chicago, USA in Dec. 1996, his M.S. degree in computer science from Western Michigan University, Kalamazoo, USA in Dec. 1992, and his B.Sc. degree in computer science from King Saud University, Riyadh, Saudi Arabia in Dec. 1987. He is currently working as an Assistant Professor in the Department of Information Technology, College of Computing and Information Technology, University of Tabuk, Saudi Arabia. His current research interests include automated software testing, distributed computing, cellular networks, and fuzzy logic.