Abstract
This thesis presents a study of load balancing algorithms and an analysis of the performance of each algorithm under varying conditions. The research also covers the characteristics of Internet traffic and its statistical properties. The network workload models implemented in the simulation program were derived from the many works already published within the Internet community. These workload models were successfully implemented, and statistical evidence is given that they exhibit characteristics similar to the workloads found on the Internet. Finally, this thesis compares and contrasts stateless server selection methods and state-based selection methods across the different algorithms studied.
INTRODUCTION
The research and results on the topics of load balancing and Internet traffic modeling are presented in this thesis.
Load balancing is a form of system performance evaluation, analysis, and optimization that attempts to distribute a number of logical processes across a network of processing elements. Many algorithms and techniques have been developed and studied for improving system performance. In early computer science research, the main focus for improving performance was to develop algorithms and techniques that optimized the use of systems with limited and expensive resources for scientific computing and information systems. Later, the emphasis shifted to networking groups of computers or workstations and sharing resources among workgroups. More recently, there has been a tremendous increase in the popularity of the Internet as a system for sharing and gathering information. Internet use has been growing at a tremendous rate, and there has always been concern in the Internet community about whether enough resources will be available to provide the quality of service its users expect.
The process of balancing, sharing, scheduling or distributing work using a network of computers or a system of multiple processing elements is a widely studied subject. This thesis will focus on recent works that address today's computing environments.
In this thesis, we will look at the art of load balancing and how it can be applied to distributed networks, and more specifically to the Internet. Initially, the focus of the research for this thesis was on the development and analysis of algorithms that minimize the amount of messaging or probing required to determine the current workload of a set of processing elements, such as a group of replicated web servers. These algorithms were to compare stochastic methods of estimating server workloads with more intrusive methods of messaging and probing. While developing the simulator used to evaluate the algorithms in this study, the tasks of modeling network workloads, and Internet workloads in particular, were identified as crucial to the research in this area; as a result, network modeling has become a significant portion of this thesis.
The thesis is divided into the following chapters. Chapter 2 discusses the concepts and research related to the subject of load balancing. Chapter 3 discusses the issues related to modeling network traffic, and in particular Internet traffic. Chapter 4 describes the network simulator used to evaluate the load balancing algorithms. Chapter 5 presents the experimental design of the network simulations, and Chapter 6 discusses the results of the experimental simulations. Finally, Chapter 7 presents the conclusions drawn from the experiments performed in this study.
Finally, in addition to the goals already mentioned, it is the hope of the author that this research can be used as a reference for the continued study in the areas of network performance evaluation, modeling and simulation.
LOAD BALANCING
Load balancing is probably the most commonly used term for describing a class of processes that attempt to optimize system performance. System performance is optimized by making the best use of a group of processing elements, typically CPUs, storage elements such as memory or disk, or some other resources that are interconnected in a distributed network. The process of Load Balancing may also be known as Load Sharing, Load Distribution, Parallel Programming, Concurrent Programming, and Control Scheduling. Although these processes can be quite different in their purpose, they all have a common goal: to allocate logical processes evenly across multiple processors, or a distributed network of processing elements, so that collectively all the logical processes are executed in the most efficient manner possible. Examples of approaches for achieving this goal are keeping idle systems busy, low execution latency, fast execution time, fast response time, maximizing job throughput, executing jobs in parallel, and distributing jobs to specialized systems. More generally, whenever a processing element becomes idle and there are logical processes waiting for service, the system should attempt to place any new or waiting process on an idle server and not on a busy one.
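As an illustration only, and not code from the thesis simulator, the following Python sketch shows this idle-first placement rule; the ProcessingElement class and its queue-length counter are hypothetical stand-ins for whatever load measure a real system would use.

```python
# Minimal sketch, not code from the thesis simulator: prefer an idle
# processing element when dispatching a new logical process, falling back
# to the least-loaded element. The ProcessingElement class and its
# queue-length counter are hypothetical.
from typing import List


class ProcessingElement:
    def __init__(self, name: str):
        self.name = name
        self.queue_length = 0  # logical processes currently assigned

    def is_idle(self) -> bool:
        return self.queue_length == 0


def place_process(elements: List[ProcessingElement]) -> ProcessingElement:
    """Place a new logical process on an idle element if one exists,
    otherwise on the element with the shortest queue."""
    idle = [pe for pe in elements if pe.is_idle()]
    target = idle[0] if idle else min(elements, key=lambda pe: pe.queue_length)
    target.queue_length += 1
    return target
```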
One area that has received considerable research attention is the dynamic load balancing of processes by migrating them from busy servers to less busy servers. The issues of dynamic load balancing, along with the basic load balancing concepts, are addressed in this chapter.
The rest of this chapter is divided into the following sections: Types of Load Balancing, Selection Methods, Related Works, and finally, Applications of Load Balancing.
2.1 Types of Load Balancing
There are two basic types of load balancing: static load balancing and dynamic load balancing. The following two sections discuss each in turn.
2.1.1 Static Load Balancing
Static load balancing is the simpler of the two types. It is the selection and placement of a logical process on some processing element located in a distributed network. The selection of the processing element for a logical process is based upon some weighting factor for that process, some kind of workload characterization of the processing elements or of the network topology, or possibly both. The selection of a processing element for a logical process requesting service may be done with one of two methods: a stateless method or a state-based method. With a stateless method, the selection of a processing element is done without regard to any knowledge of the system state. A state-based method implies that the selection of a processing element requires knowledge of the system state, either global or local. Global knowledge implies that the state of all the components of the system is known, while local knowledge implies that only partial knowledge is available. If the state of the system is needed, then some kind of messaging or probing of network resources among the requesting processes, agents, or processing elements is needed to determine their availability. Once a processing element is selected for a logical process, the logical process is executed on the selected processing element for the duration of its lifetime.
Two examples of stateless placement techniques for selecting a processing element for logical processes are round robin and random placement. Round robin placement selects the next processing element from a predefined list. Random placement selects an element randomly from a set of processing elements.
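The following Python sketch illustrates these two stateless methods; the server list is a placeholder, and the point is simply that neither method consults any server state before choosing.

```python
# Illustrative sketch of the two stateless selection methods described
# above; the server list is a placeholder and no server state is consulted.
import itertools
import random

SERVERS = ["web1", "web2", "web3", "web4"]

# Round robin: cycle through a predefined list of processing elements.
_round_robin = itertools.cycle(SERVERS)


def round_robin_select() -> str:
    return next(_round_robin)


# Random placement: choose uniformly from the set of processing elements.
def random_select() -> str:
    return random.choice(SERVERS)
```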
Examples of state-based process placement techniques include greedy algorithms and stochastic selection processes. Greedy processes typically try to find the processing element with the lightest load or the best response time, and therefore require some knowledge of the workload of each processing element or of the state of the network topology between the client and server. Another approach selects a random subset of the processing elements and then chooses the processor in the subset with the lowest load; studies by Mitzenmacher (1997) and Dahlin (1998) examined this random subset selection technique in great detail. Finally, stochastic selection techniques may be used to select a processing element based upon the probability distribution of the server loads. This technique may select any server, but the lighter a server's load, the higher the probability of selecting it. These techniques are described in detail in Chapter 4.
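The Python sketch below illustrates these state-based methods in simplified form; the loads dictionary, the subset size d, and the inverse-load weighting in the stochastic variant are assumptions for illustration, not the specific formulations used in the thesis.

```python
# Illustrative sketches of the state-based selection methods described
# above. The `loads` dictionary maps server names to current load
# estimates; how those estimates are obtained (probing or messaging) is
# outside the scope of this sketch.
import random
from typing import Dict


def greedy_select(loads: Dict[str, float]) -> str:
    # Pick the processing element reporting the lightest load.
    return min(loads, key=loads.get)


def random_subset_select(loads: Dict[str, float], d: int = 2) -> str:
    # Probe a random subset of d servers and take the least loaded of them
    # (the random subset technique studied by Mitzenmacher and Dahlin).
    subset = random.sample(list(loads), k=min(d, len(loads)))
    return min(subset, key=loads.get)


def stochastic_select(loads: Dict[str, float]) -> str:
    # Any server may be chosen, but lighter loads receive higher selection
    # probability. Inverse-load weighting is only one possible choice and
    # not necessarily the distribution used in the thesis.
    servers = list(loads)
    weights = [1.0 / (loads[s] + 1e-9) for s in servers]
    return random.choices(servers, weights=weights, k=1)[0]
```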
2.1.2 Dynamic Load Balancing
Dynamic load balancing is the initial selection and placement of a logical process on some processing element in a distributed network, followed by a possible decision, at some later point in time, to move the process to another processing element. The initial selection and placement is done with the same methods as static load balancing. At some point during the execution of the logical process, based on some decision criteria, the process may be preempted and migrated to another processing element elsewhere on the network. Generally, a process is migrated to another processor only if the migration cost or overhead is less than some predetermined metric.
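A minimal sketch of such a migration decision, assuming a simple completion-time comparison rather than the thesis' own criteria, might look as follows; the rate and cost parameters are hypothetical.

```python
# Minimal sketch, not the thesis' algorithm: migrate a running process from
# a busy element to a less busy one only when the expected completion time
# after paying the migration overhead is lower than the expected completion
# time of staying put. All rates and cost values are hypothetical.
def should_migrate(remaining_work: float,
                   current_rate: float,
                   target_rate: float,
                   migration_cost: float) -> bool:
    """Return True if moving the process is expected to finish sooner."""
    time_if_stay = remaining_work / current_rate
    time_if_move = migration_cost + remaining_work / target_rate
    return time_if_move < time_if_stay
```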
Dynamic load balancing raises a number of issues when the system chooses to move a process. Some of these issues are:
Which process or processes are candidates for migration?
Which processing element is the best target for process migration?
How do you preserve the process state when it is migrated?
Is the system homogeneous or does it have heterogeneous systems?
What is the overhead cost of migration?
When is the decision for migration made?
Who makes the decision to migrate: the system, the server, or the process?