\part{NUMERICAL METHODS}
\chapter[Competitive Analysis]{Competitive Analysis}
In\index{competitive analysis|(} on-line computation an algorithm must make its decisions based
only on past events, without reliable information about the future. Such
methods are called \ki{on-line algorithms}.\inddef{on-line
algorithm} On-line algorithms have many applications in different
areas such as computer science, economics and operations research.
The first results in this area appeared around 1970, and since
1990 more and more researchers have been working on
problems related to on-line algorithms. Many subfields have been
developed and investigated, and nowadays new results of the area
are presented at the most important conferences on algorithms.
This chapter does not give a detailed overview of the results,
which would not be possible within this framework. The goal of the
chapter is to show some of the main methods of analysing and
developing on-line algorithms by presenting some subareas in more
detail.
In the next section we define the basic notions used in the
analysis of on-line algorithms. After giving the most important
definitions we present one of the best-known on-line problems---the on-line
$k$-server problem---and some of the related results.
Then we deal with a new area by presenting on-line problems
belonging to computer networks. In the next section the on-line
bin packing problem and its multidimensional generalisations are
presented. Finally in the last section of this chapter we show
some basic results concerning the area of on-line
scheduling.
\section{Notions, definitions}
Since an on-line algorithm makes its decisions based on partial
information without knowing the whole instance in advance, we
cannot expect it to give the optimal solution which can be given
by an algorithm having full information. An algorithm which knows
the whole input in advance is called \ki{off-line
algorithm.}\inddef{off-line algorithm}
There are two main methods to measure the performance of on-line
algorithms. One possibility is to use \ki{average case
analysis}\inddef{average case analysis} where we hypothesise some
distribution on events and we study the expected total cost.
The disadvantage of this approach is that usually we do not have
any information about the distribution of the possible inputs. In
this chapter we do not use the average case analysis.
Another approach is worst case analysis, which is called
\ki{competitive analysis.}\inddef{competitive analysis} In this
case we compare the objective function value of the solution
produced by the on-line algorithm to the optimal off-line
objective function value.
In case of on-line minimisation problems an on-line algorithm is
called \ki{$C$-competitive,}\inddef{C-competitive
@$C$-competitive} if the cost of the solution produced by the
on-line algorithm is at most $C$ times the optimal
off-line cost for each input. The \ki{competitive ratio of an
algorithm} \inddef{competitive ratio} is the smallest $C$ for
which the algorithm is $C$-competitive.
For an arbitrary on-line algorithm \textsc{ALG} we denote the
objective function value achieved on input $I$ by
\textsc{ALG}$(I).$ The optimal off-line objective function value
on $I$ is denoted by \textsc{OPT}$(I)$. Using this notation we can
define the competitiveness as follows.
Algorithm \textsc{ALG} is $C$-competitive, if \textsc{ALG}$(I)
\le C \cdot \mbox{\textsc{OPT}}(I)$ is valid for each input $I$.
There are two further versions of the competitiveness which are often used. For a
minimisation problem an algorithm \textsc{ALG} is called
\ki{weakly $C$-competitive,}\inddef{weakly $C$-competitive} if
there exists such a constant $B$ that \textsc{ALG}$(I) \le C \cdot
\mbox{\textsc{OPT}}(I)+B$ is valid for each input $I$.
The \ki{weak competitive ratio of an algorithm}\inddef{weak
competitive ratio} is the smallest $C$ for which the algorithm is weakly $C$-competitive.
A further version of the competitive ratio is the asymptotic
competitive ratio. For minimisation problems the \ki{asymptotic
competitive ratio}\inddef{asymptotic competitive ratio} of
algorithm \textsc{ALG} ($R_{\mbox{\textsc{ALG}}}^{\infty}$) can be defined as follows:
$$R_{\mbox{\textsc{ALG}}}^{n}= {\rm sup} \left\{ \frac{\mbox{\textsc{ALG}}(I)}{\mbox{\textsc{OPT}}(I)} \hskip 0.2cm
\vert \hskip 0.2cm \mbox{\textsc{OPT}}(I)=n \right\} \koz,$$
$$R_{\mbox{\textsc{ALG}}}^{\infty}= \limsup_{n \to \infty} R_{\mbox{\textsc{ALG}}}^n \koz.$$
An algorithm is called \ki{asymptotically $C$-competitive}
\inddef{asymptotically $C$-competitive} if its asymptotic
competitive ratio is at most $C$.
The main property of the asymptotic ratio is that it considers the
performance of the algorithm under the assumption that the size of
the input tends to $\infty$. This means that the ratio is not
affected by the behaviour of the algorithm on small
inputs.
Similar definitions can be given for
maximisation problems. In that case algorithm \textsc{ALG} is called
$C$-competitive, if \textsc{ALG}$(I) \ge C \cdot \mbox{\textsc{OPT}}(I)$
is valid for each input $I$, and the algorithm is weakly
$C$-competitive if there exists such a constant $B$ that
\textsc{ALG}$(I) \ge C \cdot \mbox{\textsc{OPT}}(I)+B$ is valid for each
input $I$. The asymptotic ratio for maximisation problems can be
given as follows:
$$R_{\mbox{\textsc{ALG}}}^{n}= {\rm inf} \left\{ \frac{\mbox{\textsc{ALG}}(I)}{\mbox{\textsc{OPT}}(I)} \hskip 0.2cm
\vert \hskip 0.2cm \mbox{\textsc{OPT}}(I)=n \right\} \koz,$$
$$R_{\mbox{\textsc{ALG}}}^{\infty}= \liminf_{n \to \infty} R_{\mbox{\textsc{ALG}}}^n \koz.$$
The algorithm is called \ki{asymptotically $C$-competitive}
if its asymptotic ratio is at least $C$.
Many papers are devoted to \ki{randomised on-line
algorithms,}\inddef{randomised on-line algorithm} in which case
the objective function value achieved by the algorithm is a random
variable, and the expected value of this variable is used in the
definition of the competitive ratio. Since we consider only
deterministic on-line algorithms in this chapter, we do not detail
the notions related to randomised on-line algorithms.
\section{The $k$-server problem}
One of the best-known on-line problems is the on-line $k$-server
problem. To give the definition of the general problem the
notion of metric spaces is needed. A pair $(M,d)$ (where $M$ contains
the points of the space and $d$ is the distance function defined on
the set $M\times M$) is called a metric space if the following
properties are valid:
\begin{itemize}
\item $d(x,y) \ge 0$ for all $x,y \in M$,
\item $d(x,y)=d(y,x)$ for all $x,y \in M$,
\item $d(x,y)+d(y,z)\ge d(x,z)$ for all $x,y,z \in M$,
\item $d(x,y)=0$ holds if and only if $x=y$.
\end{itemize}
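The axioms above can also be spot-checked numerically on a finite point set. The following Python sketch is an illustration of ours (not part of the original text); it verifies the four axioms for the Euclidean plane used in the later examples.

```python
import math
import random

def is_metric(points, d, tol=1e-9):
    """Numerically check the four metric space axioms on a finite point set."""
    for x in points:
        for y in points:
            if d(x, y) < -tol:                       # non-negativity
                return False
            if abs(d(x, y) - d(y, x)) > tol:         # symmetry
                return False
            if (x == y) != (d(x, y) <= tol):         # d(x,y)=0 iff x=y
                return False
            for z in points:
                if d(x, y) + d(y, z) < d(x, z) - tol:  # triangle inequality
                    return False
    return True

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(15)]
```

Squared Euclidean distance, for instance, satisfies the first, second and fourth axioms but violates the triangle inequality, so it is not a metric.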
In the $k$-server problem a metric space is given, and there are
$k$ servers which can move in the space. The decision maker has to
satisfy a list of requests appearing at the points of the metric
space by sending a server to the point where the request appears.
The problem is on-line which means that the requests arrive one by
one, and we must satisfy each request without any information
about the further requests. The goal is to minimise the total
distance travelled by the servers. In the remaining parts of the
section the multiset which contains the points where the servers
are is called the \ki{configuration of the servers.}
\inddef{configuration of the servers} We use multisets, since
different servers can be at the same points of the space.
The first important results for the $k$-server problem were
achieved by Manasse,\nevindex{Manasse, Mark}
McGeoch\nevindex{McGeoch, Lyle} and Sleator\nevindex{Sleator,
Daniel}. They developed the following algorithm called
\textsc{Balance}, which we denote by \textsc{BAL}. During the
procedure the servers are located at different points. The algorithm
stores for each server the total distance travelled by the
server. The servers and the points in the space
where the servers are located are denoted by $s_1,\dots,s_k$. Let the
total distance travelled by the server $s_i$ be $D_i$. After
the arrival of a request at point $P$ algorithm \textsc{BAL} uses
server $i$ for which the value $D_i+d(s_i,P)$ is minimal. This
means that the algorithm tries to balance the distances travelled
by the servers. Therefore the algorithm maintains the server configuration
$S=\{s_1,\dots,s_k\}$ and the distances
travelled by the servers, which have starting values
$D_1=\dots =D_k=0$. The behaviour of the algorithm on input
$I=P_1,\dots,P_n$ can be given by the following pseudocode:
\begin{alg}{BAL$(I)$}\inddef{\textsc{Balance}}\index{BAL|see{\textsc{Balance}}}
1 \' \key{for} \= $j \leftarrow 1$ \key{to} $n$ \\
2 \' \> \key{do} \= $i \leftarrow {\rm argmin}_l \{D_l+d(s_l,P_j)\}$ \\
3 \' \> \> serve the request with server $i$ \\
4 \' \> \> $D_i \leftarrow D_i+d(s_i,P_j)$ \\
5 \' \> \> $s_i \leftarrow P_j$
\end{alg}
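The pseudocode above can also be illustrated by the following Python sketch (an implementation of ours, assuming the Euclidean metric of the example below; the function and variable names are not part of the original text):

```python
import math

def balance(requests, servers):
    """Sketch of the Balance (BAL) algorithm: each request is served by the
    server minimising D_i + d(s_i, P), where D_i is the distance already
    travelled by server i."""
    s = list(servers)            # current server positions
    D = [0.0] * len(s)           # total distance travelled by each server
    total = 0.0
    for p in requests:
        i = min(range(len(s)), key=lambda l: D[l] + math.dist(s[l], p))
        step = math.dist(s[i], p)
        D[i] += step
        total += step
        s[i] = p                 # the chosen server moves to the request
    return s, D, total
```

Running it on the example that follows reproduces the decisions of \textsc{BAL} step by step.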
\begin{pelda}
Consider the two dimensional Euclidean space as the metric space.
The points are two dimensional real vectors $(x,y)$, and the
distance between $(a,b)$ and $(c,d)$ is $\sqrt{(a-c)^2+(b-d)^2}$.
Suppose that there are two servers which are located at points
$(0,0)$ and $(1,1)$ at the beginning. Therefore at the beginning
$D_1=D_2=0$, $s_1=(0,0)$, $s_2=(1,1)$. Suppose that the first
request appears at point $(1,4)$. Then
$D_1+d((0,0),(1,4))=\sqrt{17}>D_2+d((1,1),(1,4))=3$, thus the
second server is used to satisfy the request and after the
action of the server $D_1=0, D_2=3$, $s_1=(0,0)$, $s_2=(1,4)$.
Suppose that the second request appears at point $(2,4)$, so
$D_1+d((0,0),(2,4))=\sqrt{20}>D_2+d((1,4),(2,4))=3+1=4$, thus
again the second server is used, and after serving the request
$D_1=0, D_2=4$, $s_1=(0,0)$, $s_2=(2,4)$. Suppose that the third
request appears at point $(1,4)$, so
$D_1+d((0,0),(1,4))=\sqrt{17}<D_2+d((2,4),(1,4))=4+1=5$, thus the
first server is used to satisfy the request, and after the action
of the server $D_1=\sqrt{17}, D_2=4$, $s_1=(1,4)$, $s_2=(2,4)$.
\end{pelda}
For the line, where the points of the metric space are the real
numbers and the distance is $d(x,y)=\vert x-y \vert$, a weakly
$k$-competitive algorithm called
\textsc{Double-Coverage}\inddef{\textsc{Double-Coverage}}
(\textsc{DC}) has been developed. The algorithm serves the request
with the closest server, and if there are servers on both sides of
the request, then the closest server on the opposite side also
travels the same distance towards the request. On input
$I=P_1,\dots,P_n$ the behaviour of the algorithm can be given by
the following pseudocode, where $s_1,\dots,s_k$ denote the
positions of the servers:
\begin{algN}{DC$(I)$}
1 \' \key{for} \= $j \leftarrow 1$ \key{to} $n$ \\
2 \' \> \key{do} $i \leftarrow {\rm argmin}_l d(P_j,s_l)$ \\
3 \' \key{if} \= $s_i= \min_l{s_l}$ or $s_i= \max_l{s_l}$ \\
4 \' \> \key{then} \= \` $\rhd$ the request is served by the $i$-th server \\
5 \' \> \> $s_i \leftarrow P_j$ \\
6 \' \> \key{else} \key{if} \= $s_i \le P_j$ \\
7 \' \> \> \key{then} \= $m \leftarrow {\rm argmin}_{l: s_l > P_j} d(s_l,P_j)$ \\
8 \' \> \> \> \` $\rhd$ the request is served by the $i$-th server \\
9 \' \> \> \> $s_m \leftarrow s_m- d(s_i,P_j)$ \\
10 \' \> \> \> $s_i \leftarrow P_j$ \\
11 \' \> \> \key{else} \> \key{if} \= $s_i \ge P_j$ \\
12 \' \> \> \> \> \key{then} \= $r \leftarrow {\rm argmin}_{l: s_l < P_j} d(s_l,P_j)$ \\
13 \' \` $\rhd$ the request is served by the $i$-th server \\
14 \' \> \> \> \> \> $s_r \leftarrow s_r + d(s_i,P_j)$ \\
15 \' \> \> \> \> \> $s_i \leftarrow P_j$
\end{algN}
\begin{pelda}
Suppose that there are three servers $s_1,s_2,s_3$ located at
points $0,1,2$. If the next request appears at point $4$, then
\textsc{DC} uses the closest server $s_3$ to serve the request.
The locations of the other servers remain unchanged, the cost of
serving the request is $2$ and the servers are at points
$0,1,4$. If the next request appears at point $2$, then \textsc{DC}
uses the closest server $s_2$ to serve the request, but there are
servers on the opposite side of the request, thus $s_3$ also travels
distance $1$ into the direction of $2$. Therefore the cost of
serving the request is $2$ and the servers will be at points
$0,2,3$.
\end{pelda}
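The behaviour of \textsc{DC} on the line, as described in the example above, can be illustrated by the following Python sketch (an implementation of ours, for illustration only; server positions are real numbers):

```python
def double_coverage(requests, servers):
    """Sketch of DC on the line: the closest server serves the request, and
    the closest server on the opposite side, if any, travels the same
    distance towards the request."""
    s = sorted(servers)
    cost = 0.0
    for p in requests:
        i = min(range(len(s)), key=lambda l: abs(s[l] - p))
        dist = abs(s[i] - p)
        # servers on the opposite side of the request
        opposite = [l for l in range(len(s))
                    if (s[l] > p if s[i] <= p else s[l] < p)]
        if s[i] != p and opposite:
            m = min(opposite, key=lambda l: abs(s[l] - p))
            s[m] += dist if s[m] < p else -dist   # move towards the request
            cost += dist
        cost += dist
        s[i] = p                                  # the closest server serves
    return sorted(s), cost
```

On the example above (servers at $0,1,2$, requests at $4$ and then $2$) it reports total cost $4$ and final positions $0,2,3$.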
The following statement, which can be proved by the potential
function technique, is valid for algorithm \textsc{DC}. This
technique is often used in the analysis of on-line algorithms.
\begin{tetel}\label{onls4}
Algorithm \textsc{DC} is weakly $k$-competitive on the line.
\end{tetel}
\begin{biz}
Consider an arbitrary sequence of requests and denote this input by
$I$. During the analysis of the procedure we suppose that one
off-line optimal algorithm and \textsc{DC} are running parallel
on the input. We also suppose that each request is served first by
the off-line algorithm and then by the on-line algorithm. The
servers of the on-line algorithm and also the positions of the
servers (which are real numbers) are denoted by $s_1,\dots, s_k$,
and the servers of the optimal off-line algorithm and also the
positions of the servers are denoted by $x_1,\dots, x_k$. We
suppose that $s_1 \le s_2 \le \dots \le s_k$ and
$x_1 \le x_2 \le \dots \le x_k$ are always valid; this can be
achieved by renaming the servers.
We prove the theorem by the potential function technique. The
potential function assigns a value to the actual positions of the
servers, so the on-line and off-line costs are compared using the
changes of the potential function. Let us define the following potential
function:
$$\Phi=k \sum_{i=1}^k \vert x_i- s_i \vert + \sum_{i<j} (s_j-s_i) \koz.$$
One can check case by case that in each step the cost of
\textsc{DC} plus the change of $\Phi$ is at most $k$ times the
cost of the off-line algorithm in the same step. Since $\Phi$ is
nonnegative, summing these inequalities over the steps of the
sequence proves that \textsc{DC} is weakly $k$-competitive.
\end{biz}
\section{Models related to computer networks}
\subsection{Web caching}
In the web caching problem\inddef{web caching} a cache of size $k$
is given, and the input is a sequence of requests for pages. Each
page $g$ has a size $s(g)$, which is a positive integer, and a
retrieval cost $c(g)$. If the requested page is not contained in
the cache, then it must be placed into the cache at cost $c(g)$;
this may require the eviction of other pages, since the total size
of the pages in the cache cannot exceed $k$. The goal is to
minimise the total retrieval cost. For this problem Young
developed the algorithm \textsc{Landlord}, which maintains a
credit value $0 \le cr(f) \le c(f)$ for each page $f$ of the
cache, which is denoted by \textit{LA}. When a page $g$ is
requested, the algorithm performs the following steps:
\begin{alg}{Landlord$(g)$}\inddef{\textsc{Landlord}}
1 \' \key{if} \= $g$ is not contained in cache \textit{LA} \\
2 \' \> \key{then} \= \key{while} \= there is not enough space for $g$ \\
3 \' \> \> \> \key{do} \= $\Delta \leftarrow \min _{f\in {LA}} cr(f)/s(f)$ \\
4 \' \> \> \> \> for each $f \in LA$ let $cr(f) \leftarrow cr(f) - \Delta \cdot s(f)$ \\
5 \' \> \> \> \> evict some pages with $cr(f)=0$ \\
6 \' \> \> place $g$ into cache $LA$ and let $cr(g) \leftarrow c(g)$ \\
7 \' \> \key{else} \> reset $cr(g)$ to any value between $cr(g)$ and $c(g)$
\end{alg}
\begin{pelda}
Suppose that $k=10$ and $LA$ contains the following three pages:
$g_1$ with $s(g_1)=2, cr(g_1)=1$, $g_2$ with $s(g_2)=4, cr(g_2)=3$
and $g_3$ with $s(g_3)=3, cr(g_3)=3$. Suppose that the next
requested page is $g_4$, with parameters $s(g_4)=4$ and $c(g_4)=4$.
Therefore, there is not enough space for it in the cache, so some pages
must be evicted. \textsc{Landlord} determines the value
$\Delta=1/2$ and changes the credits as follows: $cr(g_1)=0,
cr(g_2)=1$ and $cr(g_3)=3/2$, thus $g_1$ is evicted from cache
$LA$. There is still not enough space for $g_4$ in the cache. The
new $\Delta$ value is $\Delta=1/4$ and the new credits are:
$cr(g_2)=0, cr(g_3)=3/4$, thus $g_2$ is evicted from the cache.
Then there is enough space for $g_4$, thus it is placed into
cache $LA$ with the credit value $cr(g_4)=4$.
\end{pelda}
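The steps of the example can be reproduced by the following Python sketch of \textsc{Landlord} (the data representation is ours: the cache maps each page to a pair of its size and credit, and we evict every page whose credit reaches $0$, which is one allowed choice in the pseudocode):

```python
def landlord(cache, k, g, size, cost):
    """Sketch of Landlord for web caching; cache maps page -> [size, credit].
    Assumes s(g) <= k, so the requested page always fits eventually."""
    if g not in cache:
        while sum(s for s, _ in cache.values()) + size > k:
            # decrease all credits proportionally to the page sizes
            delta = min(cr / s for s, cr in cache.values())
            for page, (s, cr) in list(cache.items()):
                cache[page] = [s, cr - delta * s]
                if cache[page][1] <= 1e-12:
                    del cache[page]          # evict pages with zero credit
        cache[g] = [size, cost]              # place g with credit c(g)
    else:
        # reset cr(g) to any value between cr(g) and c(g); here we use c(g)
        cache[g][1] = cost
    return cache
```

Starting from the cache of the example ($k=10$) and requesting $g_4$ evicts $g_1$ and $g_2$, leaves $g_3$ with credit $3/4$, and places $g_4$ with credit $4$.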
\textsc{Landlord} is weakly $k$-competitive, but a stronger
statement is also true. For the web caching problem an on-line
algorithm \textsc{ALG} is called $(C,k,h)$-competitive, if
\inddef{C-kh@$C$-$(k,h)$-competitive} there exists such
a constant $B$, that $\textsc{ALG}_k(I) \le C \cdot
\mbox{\textsc{OPT}}_h(I)+B$ is valid for each input, where
\textsc{ALG}$_k(I)$ is the cost of \textsc{ALG} using a cache of
size $k$ and \textsc{OPT}$_h(I)$ is the optimal off-line cost
using a cache of size $h$. The following statement holds for
algorithm \textsc{Landlord}.
\begin{tetel}\label{onln3}
If $h\le k,$ then algorithm \textsc{Landlord} is
$(k/(k-h+1),k,h)$-competitive.
\end{tetel}
\begin{biz}
Consider an arbitrary input sequence of pages and denote the input by
$I$. We use the potential function technique. During the
analysis of the procedure we suppose that an off-line optimal
algorithm with cache size $h$ and \textsc{Landlord} with cache
size $k$ are running parallel on the input. We also suppose that
each page is placed first into the
off-line cache by the off-line algorithm and then it is placed into
\textit{LA} by the on-line algorithm. We denote the set of the pages contained in the actual cache
of the optimal off-line algorithm by \textit{OPT.} Consider the following
potential function:
$$\Phi=(h-1)\sum_{f \in \textit{LA}} cr(f) + k \sum_{f \in \textit{OPT}} (c(f)-cr(f)) \koz.$$
The changes of the potential function during the different steps are as follows.
\begin{itemize}
\item \textsc{OPT} places $g$ into its cache.
In this case \textsc{OPT} has cost $c(g)$. In the potential function only
the second part may change. On the other hand, $cr(g)\ge 0$, thus
the increase of the potential function is at most $k \cdot c(g)$.
\item \textsc{Landlord} decreases the credit value for each $f \in
LA$.
In this case for each $f\in \textit{LA}$ the decrease of $cr(f)$ is $\Delta
\cdot s(f)$, thus the decrease of $\Phi$ is
$$\Delta((h-1)s(LA)-ks(\textit{OPT}\cap \textit{LA})) \koz,$$
\noindent where $s(\textit{LA})$ and $s(\textit{OPT}\cap \textit{LA})$ denote the total size
of the pages contained in sets \textit{LA} and $\textit{OPT}\cap \textit{LA}$,
respectively. At the time when this step is performed,
\textsc{OPT} has already placed page $g$ into its cache
$OPT$, but the page is not contained in cache \textit{LA}. Therefore
$s(\textit{OPT}\cap \textit{LA}) \le h- s(g)$. On the other hand, this step is
performed if there is not enough space for the page in \textit{LA} thus
$s(\textit{LA})>k-s(g)$, which yields $s(\textit{LA})\ge k-s(g)+1$, because the
sizes are positive integers. Therefore we obtain that the decrease
of $\Phi$ is at least
$$\Delta \big((h-1)(k-s(g)+1) - k(h-s(g))\big)\koz.$$
\noindent Since $s(g) \ge 1$ and $k\ge h$, this decrease is at least $\Delta((h-1)(k-1+1) - k(h-1))=0$.
\item \textsc{Landlord} evicts a page $f$ from cache $LA$.
Since \textsc{Landlord} only evicts pages having credit $0$, during this step $\Phi$ remains
unchanged.
\item \textsc{Landlord} places page $g$ into cache $LA$ and sets the value $cr(g)=c(g)$.
The cost of \textsc{Landlord} is $c(g)$. On the other hand,
$g$ was not contained in cache \textit{LA} before the performance of
this step, thus $cr(g)=0$ was valid. Furthermore, first
\textsc{OPT} places the page into its cache, thus $g\in \textit{OPT}$ is
also valid. Therefore the decrease of $\Phi$ is
$-(h-1)c(g)+kc(g)=(k-h+1)c(g)$.
\item \textsc{Landlord} resets the credit of a page $g \in \textit{LA}$ to a value between
$cr(g)$ and $c(g)$.
In this case $g\in \textit{OPT}$ is valid, since \textsc{OPT} places
page $g$ into its cache first. Value $cr(g)$ is not decreased
and $k>h-1$, thus $\Phi$ cannot increase during this step.
\end{itemize}
Hence the potential function has the following properties.
\begin{itemize}
\item If \textsc{OPT} places a page into its cache, then the increase of the potential
function is at most $k$ times the cost of \textsc{OPT}.
\item If \textsc{Landlord} places a page into its cache, then the decrease of
$\Phi$ is $(k-h+1)$ times the cost of \textsc{Landlord}.
\item During the other steps $\Phi$ does not increase.
\end{itemize}
By the above properties we obtain that $\Phi_f - \Phi_0 \le k
\cdot \mbox{\textsc{OPT}}_h(I) -(k-h+1) \cdot \textsc{Landlord}_k(I)$,
where $\Phi_0$ and $\Phi_f$ are the starting and final values of
the potential function. The potential function is nonnegative,
thus we obtain that $(k-h+1) \cdot \textsc{Landlord}_k(I) \le k
\cdot \textsc{OPT}_h(I) + \Phi_0$, which proves that
\textsc{Landlord} is $(k/(k-h+1),k,h)$-competitive.
\end{biz}
\subsection{On-line routing}
In computer networks the congestion of the communication channels
decreases the speed of the communication and may cause loss of
information. Thus congestion control is one of the most important
problems in the area of computer networks. A related important
problem is the routing of the communication, where we have to
determine the path of the messages in the network. Since we have
no information about the further traffic of the network,
routing is an on-line problem. Here we present two on-line
optimisation models for the routing problem. \\
\noindent {\bf The mathematical model}\inddef{the mathematical
model of routing}
The network is given by a graph, each edge $e$ has a maximal
available bandwidth denoted by $u(e)$ and the number of edges is
denoted by $m$. The input is a sequence of requests, where the
$j$-th request is given by a vector $(s_j,t_j,r_j,d_j,b_j)$ which means that
to satisfy the request bandwidth $r_j$ must be reserved on a path
from $s_j$ to $t_j$ for time duration $d_j$ and the benefit of
serving a request is $b_j$. Hereafter, we assume that
$d_j=\infty$, and we omit the value of $d_j$ from the
requests. The problem is on-line which means that after the
arrival of a request the algorithm has to make the decisions
without any information about the further requests. We consider
the following two models.
\textit{Load balancing model:}\inddef{load balancing routing
model} In this model all requests must be satisfied. Our aim
is to minimise the maximum of the overload of the edges. The
overload is the ratio of the total bandwidth assigned to
the edge and the available bandwidth. Since each request is served,
the benefits play no role in this model.
\textit{Throughput model:}\inddef{throughput routing model} In
this model the decision maker is allowed to reject some requests.
The sum of the bandwidths reserved on an edge cannot be more
than the available bandwidth. The goal is to maximise the sum of
the benefits of the accepted requests. We investigate this model
in detail. It is important to note that this is a maximisation
problem thus the notion of competitiveness is used in the form
defined for maximisation problems.
Below we define the exponential algorithm. We need the following
notations to define and analyse the algorithm. Let $P_i$ denote the
path which is assigned to the accepted request $i$. Let $A$ denote
the set of requests accepted by the on-line algorithm. In this case
$l_e(j)=\sum_{i\in A,\ i<j,\ e \in P_i} r_i/u(e)$ is the relative
load of edge $e$ when request $j$ arrives, i.e. the ratio of the
bandwidth reserved on the edge and the available bandwidth. Let
$\mu$ be a parameter of the algorithm and let
$c_e(j)=\mu^{l_e(j)}$ denote the cost of edge $e$. The cost of a
path $P$ for request $j$ is defined as
$$C(P)= \sum_{e \in P} \frac{r_j}{u(e)} c_e(j) \koz.$$
The exponential algorithm chooses a path of minimal cost between
$s_j$ and $t_j$, and it accepts the request if this cost is not
too large compared to the benefit. The algorithm can be given by
the following pseudocode:
\begin{alg}{EXP$(j)$}\inddef{exponential algorithm}
1 \' compute the values $c_e(j)=\mu^{l_e(j)}$ \\
2 \' let $P_j$ be a path between $s_j$ and $t_j$ with minimal cost $C(P_j)$ \\
3 \' \key{if} \= $C(P_j) \le 2m b_j$ \\
4 \' \> \key{then} \= reserve bandwidth $r_j$ on path $P_j$ \\
5 \' \> \key{else} \> reject the request
\end{alg}
\textit{Note.} If we modify this algorithm to accept each request,
then we obtain an exponential algorithm for the load balancing
model.
\begin{pelda}
Consider the network which contains vertices $A, \ B, \ C, \ D$
and edges $(A,B), \ (B,D), \ (A,C), \ (C,D)$, where the available
bandwidths of the edges are $u(A,B)=1$, $u(B,D)=3/2$, $u(A,C)=2$,
$u(C,D)=3/2$. Suppose that $\mu=10$ and that the reserved
bandwidths are: $3/4$ on path $(A,B,D)$, $5/4$ on path
$(A,C,D)$, $1/2$ on path $(B,D)$, $1/2$ on path $(A,C)$. The next
request $j$ is to reserve bandwidth $1/8$ on some path between $A$
and $D$. Therefore values $l_e(j)$ are:
$l_{(A,B)}(j)=(3/4):1=3/4$, $l_{(B,D)}(j)=(3/4+1/2):(3/2)=5/6$,
$l_{(A,C)}(j)=(5/4+1/2):2=7/8$, $l_{(C,D)}(j)=(5/4):(3/2)=5/6$.
There are two paths between $A$ and $D$ and the costs are:
$$C(A,B,D)=1/8 \cdot 10^{3/4} + 1/12 \cdot 10^{5/6}\approx 1.271 \koz,$$
$$C(A,C,D)=1/16 \cdot 10^{7/8} + 1/12 \cdot 10^{5/6}\approx 1.036 \koz.$$
The minimal cost belongs to path $(A,C,D).$ Therefore, if $2m
b_j=8 b_j \ge 1.036$, then the request is accepted and the
bandwidth is reserved on path $(A,C,D)$. Otherwise the request
is rejected.
\end{pelda}
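The cost computation of the example can be checked with a few lines of Python (the edge names and the representation are ours):

```python
def path_cost(path_edges, loads, u, r, mu):
    """Cost of reserving bandwidth r on a path, as used by the exponential
    algorithm: C(P) = sum over the edges of (r/u(e)) * mu^{l_e}."""
    return sum(r / u[e] * mu ** loads[e] for e in path_edges)

# the network of the example: available bandwidths and reserved bandwidths
u = {'AB': 1, 'BD': 3/2, 'AC': 2, 'CD': 3/2}
reserved = {'AB': 3/4, 'BD': 3/4 + 1/2, 'AC': 5/4 + 1/2, 'CD': 5/4}
loads = {e: reserved[e] / u[e] for e in u}     # relative loads l_e(j)

mu, r = 10, 1/8
c_abd = path_cost(['AB', 'BD'], loads, u, r, mu)
c_acd = path_cost(['AC', 'CD'], loads, u, r, mu)
```

The computation confirms that the path $(A,C,D)$ has the smaller cost.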
To analyse the algorithm consider an arbitrary input sequence $I$.
Let $A$ denote the set of the requests accepted by \textsc{EXP},
and $A^*$ the set of the requests which are accepted by
\textsc{OPT} and rejected by \textsc{EXP}. Furthermore let
${P_j}^*$ denote the path reserved by \textsc{OPT} for each
request $j$ accepted by \textsc{OPT}. Define the value $l_e(v)=
\sum_{i\in A, e \in P_i} r_i/u(e)$ for each $e$, which value gives
the ratio of the reserved bandwidth and the available bandwidth
for $e$ at the end of the on-line algorithm. Furthermore, let
$c_e(v)=\mu^{l_e(v)}$ for each $e$.
Let $\mu=4mPB$, where $B$ is an upper bound on the benefits and
for each request and each edge the inequality
$$\frac{1}{P} \le \frac{r(j)}{u(e)} \le \frac{1}{\lg \mu}$$
\noindent is valid. In this case the following statements hold.
\begin{lemma} \label{onln5}
The solution given by algorithm \textsc{EXP} is feasible, i.e. the
sum of the reserved bandwidths is not more than the available
bandwidth for each edge.
\end{lemma}
\begin{biz}
We prove the statement by contradiction. Suppose that there is an
edge $f$ where the available bandwidth is violated. Let $j$ be the
first accepted request which violates the available bandwidth on
$f$.
The inequality $r_j/u(f) \le 1/ \lg \mu$ is valid for $j$ and $f$
(it is valid for all edges and requests). Furthermore, after the
acceptance of request $j$ the sum of the bandwidths is greater
than the available bandwidth on edge $f$, thus we obtain that
$l_f(j)> 1-1/\lg \mu$. On the other hand, this yields that the
inequality
$$C(P_j)= \sum_{e \in P_j} \frac{r_j}{u(e)} c_e(j) \ge \frac{r_j}{u(f)}
c_f(j)> \frac{r_j}{u(f)} \mu^{1-1/\lg \mu}$$
\noindent holds for value $C(P_j)$ used in algorithm
\textsc{EXP}. Using the assumption on $P$ we obtain that
$\frac{r_j}{u(f)}\ge \frac{1}{P}$, and $\mu^{1-1/\lg \mu}=\mu/2$,
thus from the above inequality we obtain that
$$C(P_j) > \frac{1}{P}\frac{\mu}{2}=2mB \koz .$$
\noindent On the other hand, this is a contradiction, since
$C(P_j)>2mB \ge 2m b_j$ means that \textsc{EXP} would have
rejected the request. This proves the statement of the lemma.
\end{biz}
\begin{lemma} \label{onln6}
For the solution given by \textsc{OPT} the following inequality
holds:
$$ \sum_{j\in A^*} b_j \le \frac{1}{2m} \sum_{e \in E} c_e(v) \koz.$$
\end{lemma}
\begin{biz}
Since \textsc{EXP} rejected each $j\in A^*$, we have $b_j <
\frac{1}{2m}\sum_{e \in {P_j}^*} \frac{r_j}{u(e)} c_e(j)$ for each
$j\in A^*$, and this inequality is valid for all paths between
$s_j$ and $t_j$. Therefore
$$
\sum_{j\in A^*} b_j < \frac{1}{2m} \sum_{j \in A^*}
\sum_{e \in {P_j}^*} \frac{r_j}{u(e)} c_e(j) \koz.$$
\noindent On the other hand, $c_e(j) \le c_e(v)$ holds for each
$e$, thus we obtain that
$$\sum_{j\in A^*} b_j < \frac{1}{2m} \sum_{e \in E} c_e(v)
\Big( \sum_{j \in A^* : e \in {P_j}^*} \frac{r_j}{u(e)} \Big)
\koz.$$
\noindent The sum of the bandwidths reserved by \textsc{OPT} is
at most the available bandwidth $u(e)$ for each $e$, thus
$\sum_{j \in A^* : e \in {P_j}^*} \frac{r_j}{u(e)}\le 1$.
Consequently,
$$ \sum_{j\in A^*} b_j \le \frac{1}{2m} \sum_{e \in E} c_e(v) \koz,$$
\noindent which is the inequality we wanted to prove.
\end{biz}
\begin{lemma} \label{onln7}
For the solution given by algorithm \textsc{EXP} the following
inequality holds:
$$ \frac{1}{2m} \sum_{e \in E} c_e(v) \le (1+ \lg \mu) \sum_{j\in A} b_j\koz.$$
\end{lemma}
\begin{biz}
It is enough to show that the inequality
$\sum_{e \in P_j} (c_e(j+1)-c_e(j)) \le 2mb_j \log_2 \mu$ is valid
for each request $j \in A$. On the other hand,
$$c_e(j+1)-c_e(j)=\mu^{l_e(j) +\frac{r_j}{u(e)}}-\mu^{l_e(j)}=\mu^{l_e(j)}(2^{\log_2 \mu \frac{r_j}{u(e)}} -1) \koz .$$
Since $2^x-1 \le x$ holds for each $0\le x \le 1$, and by the
assumption on the input $\log_2 \mu \cdot \frac{r_j}{u(e)} \le 1$
is valid, we obtain that $c_e(j+1)-c_e(j) \le \log_2 \mu \cdot
\frac{r_j}{u(e)} c_e(j)$. Summing this inequality for the edges of
$P_j$ and using the fact that \textsc{EXP} accepted request $j$
(thus $C(P_j)\le 2m b_j$) we obtain that
$$\sum_{e \in P_j} (c_e(j+1)-c_e(j)) \le \log_2 \mu \cdot C(P_j) \le 2m b_j \log_2 \mu \koz.$$
Since $c_e(1)=1$ for each edge, summing the above inequality for
the accepted requests yields
$\sum_{e \in E} c_e(v) \le m + 2m \lg \mu \sum_{j \in A} b_j$,
from which the statement of the lemma follows.
\end{biz}
By the above lemmas we obtain the following statement.
\begin{tetel}\label{onln8}
Algorithm \textsc{EXP} is $\frac{1}{2+\lg \mu}$-competitive.
\end{tetel}
\begin{biz}
By Lemmas \ref{onln6} and \ref{onln7} we obtain that
$$\mbox{\textsc{OPT}}(I) \le \sum_{j\in A^*} b_j + \sum_{j\in A} b_j
\le (2+\lg \mu) \sum_{j\in A} b_j = (2+\lg \mu)\, \mbox{\textsc{EXP}}(I) \koz,$$
which proves the statement.
\end{biz}
\section{On-line bin packing}
In the bin packing problem the input is a list of items, where the
size of each item is a positive number not greater than $1$. The
goal is to pack the items into unit size bins minimising the
number of the bins used. In the on-line version the items arrive
one by one, and each item must be packed without any information
about the further items.
First we consider the \ki{bounded space}\inddef{bounded space
model} model, where the number of open bins (bins which can still
receive items) is bounded by a parameter $k$. In the $1$-bounded
space model the only reasonable algorithm is
\textsc{Next-Fit}\inddef{\textsc{Next-Fit} algorithm}
(\textsc{NF}), which packs the actual item into the open bin if it
fits; otherwise it closes the bin, opens a new one and puts the
item into it.
\begin{tetel}\label{onll1}
The asymptotic competitive ratio of \textsc{NF} is $2$.
\end{tetel}
\begin{biz}
Consider an arbitrary list $\sigma$ of items, and let
$m=\textsc{NF}(\sigma)$ and $n=\textsc{OPT}(\sigma)$. The total
size of the items packed into the $i$-th and the $(i+1)$-th bins
is more than $1$, since in the opposite case the first item of the
$(i+1)$-th bin fits into the $i$-th bin, which contradicts the
definition of the algorithm.
Therefore the total size of the items is more than $\lfloor m/2 \rfloor$.
On the other hand the optimal off-line algorithm cannot put items
with total size more than $1$ into the same bin, thus we obtain
that $n > \lfloor m/2 \rfloor$. This yields that $m\le 2n-1$, thus
$$\frac{\textsc{NF}(\sigma)}{\textsc{OPT}(\sigma)} \le
\frac{2n-1}{n} = 2-1/n \koz.$$
Consequently, we proved that the algorithm is asymptotically
$2$-competitive.
Now we prove that the bound is tight. Consider the following
sequence for each $n$ denoted by $\sigma_n$. The sequence contains
$4n-2$ items, the size of the $(2i-1)$-th item is $1/2$, the size
of the $(2i)$-th item is $1/(4n-2)$, $i=1,\dots,2n-1$. Algorithm
\textsc{NF} puts the $(2i-1)$-th and the $2i$-th items into the
$i$-th bin for each bin, thus $\textsc{NF}(\sigma_n)= 2n-1$. The
optimal off-line algorithm puts pairs of $1/2$ size items into the
first $n-1$ bins and it puts one $1/2$ size item and the small
items into the $n$-th bin, thus $\textsc{OPT}(\sigma_n)=n$. Since
$\textsc{NF}(\sigma_n)/\textsc{OPT}(\sigma_n)=2-1/n$ and this
function tends to $2$ as $n$ tends to $\infty$, we proved
that the asymptotic competitive ratio of the algorithm is at least $2$.
\end{biz}
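The tightness argument can be illustrated by simulating \textsc{NF} on the sequence $\sigma_n$ in Python (an illustrative sketch of ours, not part of the original text):

```python
def next_fit(items):
    """Sketch of Next-Fit: keep a single open bin and close it for good
    whenever the next item does not fit."""
    if not items:
        return 0
    bins, remaining = 1, 1.0        # remaining capacity of the open bin
    for a in items:
        if a > remaining:
            bins += 1               # close the open bin, open a new one
            remaining = 1.0
        remaining -= a
    return bins

# the tight sequence sigma_n from the proof:
# sizes alternate between 1/2 and 1/(4n-2)
n = 5
sigma = [1/2 if i % 2 == 0 else 1/(4*n - 2) for i in range(4*n - 2)]
```

On $\sigma_n$ the simulation uses $2n-1$ bins, while an optimal packing needs only $n$.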
If $k>1$, then there are better algorithms than \textsc{NF} for
the $k$-bounded space model. The best known bounded space on-line
algorithms belong to the family of \ki{harmonic
algorithms,}\inddef{harmonic algorithm} where the basic idea is
that the interval $(0,1]$ is divided into subintervals and each
item has a type which is the subinterval of its size. The items of
the different types are packed into different bins. The
algorithm runs several \textsc{NF} algorithms simultaneously; each
for the items of a certain type. \\
\textbf{Algorithm \textsc{First-Fit} and the weight function technique}
In this section we present the weight function technique which is
often used in the analysis of the bin packing algorithms. We show
this method by analysing algorithm \textsc{First-Fit}
(\textsc{FF}). \inddef{\textsc{First-Fit} algorithm}
Algorithm \textsc{FF} can be used when the number of open bins is
not bounded. The algorithm puts the item into the first opened bin
where it fits. If the item does not fit into any of the bins, then
a new bin is opened and the algorithm puts the item into it. The
pseudocode of the algorithm is also presented in the chapter of
memory management. The asymptotic competitive ratio of the
algorithm is bounded above by the following theorem.
\begin{tetel}\label{onll2} \textsc{FF} is asymptotically 1.7-competitive.
\end{tetel}
\begin{biz} In the proof we use the weight function technique
whose idea is that a weight is assigned to each item to measure in
some way how difficult it can be to pack the certain item. The
weight function and the total size of the items are used to bound
the off-line and on-line objective function values. We use the following weight function:
$$w(x)= \begin{cases}
6x/5, & \text {if $0\le x\le 1/6$} \\
9x/5-1/10, & \text {if $1/6 \le x \le 1/3$} \\
6x/5 + 1/10, & \text {if $1/3 \le x \le 1/2$} \\
6x/5+ 2/5, & \text {if $1/2 < x \le 1$} \koz.
\end{cases}
$$
Let $w(H)=\sum_{i\in H} w(a_i)$ for any set $H$ of items. The
properties of the weight function are summarised in the following
two lemmas. Both lemmas can be proven by case analysis based on
the sizes of the possible items. The proofs are long and contain
many technical details, therefore we omit them.
\begin{lemma}\label{onll3} If $\sum_{i \in H} a_i \le 1$ is valid for a set $H$ of items, then
$w(H)\le 17/10$ also holds.
\end{lemma}
\begin{lemma}\label{onll4} For an arbitrary list $L$ of items $w(L) \ge \textsc{FF}(L) -2$.
\end{lemma}
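Although we omit the proofs, the first lemma can be spot-checked numerically. The following Python sketch (ours, for illustration only) generates random item sets fitting into one bin and records the largest observed total weight:

```python
import random

def w(x):
    """The weight function used in the analysis of First-Fit."""
    if x <= 1/6:
        return 6*x/5
    if x <= 1/3:
        return 9*x/5 - 1/10
    if x <= 1/2:
        return 6*x/5 + 1/10
    return 6*x/5 + 2/5

# random item sets with total size at most 1 (the content of one bin)
random.seed(42)
worst = 0.0
for _ in range(20000):
    items, total = [], 0.0
    while True:
        a = random.uniform(0, 1)
        if total + a > 1:
            break                   # the next item would overflow the bin
        items.append(a)
        total += a
    worst = max(worst, sum(w(a) for a in items))
```

The observed maximum stays below $17/10$; the bound is attained, for example, by items of sizes slightly above $1/2$, $1/3$ and slightly below $1/6$.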
Using these lemmas we can prove that the algorithm is
asymptotically 1.7-competitive. Consider an arbitrary list $L$ of
items. The optimal off-line algorithm can pack the items of the
list into $\textsc{OPT}(L)$ bins. The algorithm packs items with
total size at most $1$ into each bin, thus from Lemma \ref{onll3}
it follows that $w(L) \le 1.7 \textsc{OPT}(L)$. On the other hand
considering Lemma \ref{onll4} we obtain that $\textsc{FF}(L)-2 \le
w(L)$, which yields that $\textsc{FF}(L) \le 1.7\, \textsc{OPT}(L)+2$, and
this inequality proves that the algorithm is asymptotically
1.7-competitive.
\end{biz}
It is important to note that the bound is tight, i.e. it is also
true that the asymptotic competitive ratio of \textsc{FF} is
$1.7$. Many algorithms have been developed with smaller asymptotic
competitive ratio than $17/10$, the best algorithm known at
present time is asymptotically $1.5888$-competitive.
\smallskip
\textbf{Lower bounds}
In this part we consider the techniques for proving lower bounds
on the possible asymptotic competitive ratio. First we present a
simple lower bound and then we show how the idea of the proof can
be extended into a general method.
\begin{tetel}\label{onll5}
No on-line algorithm for the bin packing problem can have smaller asymptotic competitive ratio
than 4/3.
\end{tetel}
\begin{biz} Let $A$ be an arbitrary on-line algorithm. Consider the following sequence of items.
Let $\varepsilon<1/12$ and $L_1$ be a list of $n$ items of size
$1/3+ \varepsilon$, and $L_2$ be a list of $n$ items of size
$1/2+\varepsilon$. The input is started with $L_1$. Then $A$ packs
one or two items into each of its bins. Denote the number of bins
containing two items by $k$. In this case the number of the used
bins is $A(L_1)=k+n-2k=n-k$. On the other hand, the optimal
off-line algorithm can pack pairs of items into the bins, thus
OPT$(L_1)=\lceil n/2 \rceil$.
Now suppose that the input is the combined list $L_1L_2$. The
algorithm is an on-line algorithm, therefore it does not know
whether the input is $L_1$ or $L_1L_2$ at the beginning, thus it
also uses $k$ bins for packing two items from the part $L_1$.
Therefore among the items of size $1/2+\varepsilon$ only $n-2k$
can be paired with earlier items and the other ones need separate
bins. Thus A$(L_1L_2)\ge n-k +(n-(n-2k))=n+k$. On the other hand,
the optimal off-line algorithm can pack a smaller (size
$1/3+\varepsilon$) item and a larger (size $1/2+\varepsilon$) item
into each bin, thus OPT$(L_1L_2)=n$.
So we obtain that there is a list $L$ for algorithm $A$ where
$$\mbox{A}(L)/\mbox{OPT}(L) \ge {\rm max} \left \{\frac{n-k}{n/2}, \frac{n+k}{n} \right \}\ge 4/3 \koz.$$
Moreover, for the lists constructed above OPT$(L)$ is at least
$\lceil n/2 \rceil $, which can be arbitrarily large. Therefore the
above inequality proves that the asymptotic competitive
ratio of $A$ is at least $4/3$, and this is what we wanted to
prove.
\end{biz}
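The adversary argument of the proof can be replayed in code against any concrete on-line algorithm. As an illustration we run it against First Fit (the choice of \textsc{FF} as the tested algorithm, the value of $\varepsilon$ and all names are ours); for \textsc{FF} the construction even forces a ratio of $3/2$, which is above the $4/3$ guaranteed for every algorithm.

```python
def first_fit_bins(items):
    """First Fit: each item goes into the first bin with enough room."""
    bins = []                       # remaining capacities of the open bins
    for s in items:
        for i, free in enumerate(bins):
            if s <= free + 1e-12:
                bins[i] = free - s
                break
        else:
            bins.append(1.0 - s)
    return len(bins)

def adversary_ratio(n, eps=1/24):
    """Replay the proof's two-phase input against First Fit."""
    small = [1/3 + eps] * n         # the list L_1
    large = [1/2 + eps] * n         # the list L_2
    opt_l1 = (n + 1) // 2           # OPT pairs the small items
    opt_l1l2 = n                    # OPT pairs one small and one large item
    return max(first_fit_bins(small) / opt_l1,
               first_fit_bins(small + large) / opt_l1l2)

print(adversary_ratio(100))   # 1.5 for FF, in accordance with the 4/3 bound
```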
The fundamental idea of the above proof is that a long sequence
(in this proof $L_1L_2$) is considered, and depending on the
behaviour of the algorithm a prefix of the sequence is selected as
input for which the ratio of the costs is maximal. It is an
evident extension to consider more difficult sequences. Many lower
bounds have been proven based on different sequences. On the
other hand, the computations which are necessary to analyse the
sequence have become more and more difficult. Below we show how
the analysis of such sequences can be interpreted as a mixed
integer programming problem, which makes it possible to use
computers to develop lower bounds.
Consider the following sequence of items. Let $L=L_1L_2\dots L_k$,
where $L_i$ contains $n_i=\alpha_in$ identical items of size
$a_i$. If algorithm $A$ is asymptotically $C$-competitive, then
the inequality
$$C \ge \limsup_{n\to \infty}
\frac{\mbox{\textsc{A}}(L_1\dots L_j)}{\mbox{\textsc{OPT}}(L_1\dots L_j)} $$
\noindent is valid for each $j$. Since this inequality holds for
every on-line algorithm, the strongest bound which the technique
can guarantee is the value
$$R={\rm min}_\textsc{A} {\rm max}_{j=1,\dots,k}
\limsup_{n\to \infty}
\frac{\textsc{A}(L_1\dots L_j)}{\textsc{OPT}(L_1\dots L_j)} \koz,$$
\noindent which gives a lower bound on the possible
asymptotic competitive ratio. We can determine this value as the
optimal solution of a mixed integer programming problem. To define
this problem we need the following definitions.
The content of a bin can be described by the packing pattern of
the bin,\inddef{packing pattern} which specifies how many
elements the bin contains from each subsequence. Formally,
a \ki{packing pattern} is a $k$-dimensional vector
$(p_1,\dots,p_k)$, where coordinate $p_j$ is the number of
elements contained in the bin from subsequence $L_j$. For the
packing patterns the constraint $\sum_{j=1}^k a_jp_j \le 1$ must
hold. (This constraint ensures that the items described by the
packing pattern fit into the bin.)
We classify the set $T$ of the possible packing patterns. For each
$j$ let $T_j$ be the set of the patterns whose first
positive coordinate is the $j$-th one. (Pattern $p$ belongs to
class $T_j$ if $p_i=0$ for each $i<j$ and $p_j>0$.)
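For a small list the packing patterns and their classes can be enumerated by brute force. The following sketch (all names are ours) does this for item sizes $1/2$, $1/3$, $1/4$; for instance, class $T_2$ then consists of the six patterns $(0,1,0)$, $(0,1,1)$, $(0,1,2)$, $(0,2,0)$, $(0,2,1)$, $(0,3,0)$.

```python
from itertools import product

def packing_patterns(sizes, tol=1e-9):
    """All vectors (p_1, ..., p_k) with sum_j sizes[j] * p_j <= 1."""
    caps = [int(1 / a) for a in sizes]        # p_j is at most floor(1/a_j)
    return [p for p in product(*(range(c + 1) for c in caps))
            if sum(a * x for a, x in zip(sizes, p)) <= 1 + tol]

def pattern_class(p):
    """1-based index of the first positive coordinate (0 for the zero pattern)."""
    for j, x in enumerate(p, start=1):
        if x > 0:
            return j
    return 0

pats = packing_patterns([1/2, 1/3, 1/4])
t2 = [p for p in pats if pattern_class(p) == 2]   # the class T_2
```

Such an enumeration produces exactly the variables $n(p)$ of the mixed integer programming problem below.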
Consider the packing produced by A. Each bin is packed by some
packing pattern, therefore the packing can be described by the
packing patterns. For each $p\in T$ denote the number of bins
which are packed by the pattern $p$ by $n(p)$. The packing
produced by the algorithm is given by variables $n(p)$.
Observe that the bins which are packed by a pattern from class
$T_j$ receive their first element from subsequence $L_j$.
Therefore we obtain that the number of bins opened by A to pack
the elements of subsequence $L_1\dots L_j$ can be given by
variables $n(p)$ as follows:
$$\mbox{A}(L_1\dots L_j)=\sum_{i=1}^j \sum_{p\in T_i} n(p) \koz .$$
Consequently, for a given $n$ the required value $R$ can be
determined by the solution of the following mixed integer
programming problem.
\hskip 0.01cm
{\rm {\bf Min} $R$}
\smallskip
$\sum_{p\in T} p_j n(p)=n_j$, \hskip 5.2cm $1\le j \le k,$
\smallskip
$\sum_{i=1}^j \sum_{p\in T_i} n(p) \le R \cdot \mbox{OPT}(L_1\dots L_j)$,
\hskip 2.1cm $1\le j\le k,$
\smallskip
$n(p) \in \{0,1,\dots \}$, \hskip 5.4cm $p\in T.$
\medskip
The first $k$ constraints describe that the algorithm has to pack
all items. The second $k$ constraints describe that $R$ is at
least as large as the ratio of the on-line and off-line costs for
the subsequences considered.
The set $T$ of the possible packing patterns and also the optimal
values OPT$(L_1\dots L_j)$ are determined by the list
$L_1L_2 \dots L_k$.
In this problem the number and the value of the
variables can be large, thus instead of the problem its linear
programming relaxation is considered. Moreover, we are interested
in the solution under the assumption that $n$ tends to $\infty$
and it can be proven that the integer programming and the linear
programming relaxation give the same bound in this case.
The best currently known bound was proven by this method; it
states that no on-line algorithm can have a smaller asymptotic
competitive ratio than $1.5401$.
\subsection{Multidimensional models}
The bin packing problem has three different multidimensional
generalisations: the vector packing, the box packing and the strip
packing models. We discuss only the strip packing problem in
detail; for the other generalisations we give only the model.
In the \ki{vector packing problem}\inddef{vector packing problem}
the input is a list of $d$-dimensional vectors, and the algorithm
has to pack these vectors into the minimal number of bins. A
packing is legal for a bin if for each coordinate the sum of the
values of the elements packed into the bin is at most $1$. In the
on-line version the vectors are coming one by one and the
algorithm has to assign the vectors to the bins without any
information about the further vectors. In the \ki{box packing
problem}\inddef{box packing problem} the input is a list of
$d$-dimensional boxes and the goal is to pack the items into the
minimal number of $d$-dimensional unit cubes without overlapping. In
the on-line version the items are coming one by one and the
algorithm has to pack them into the cubes without any information
about the further items. \\
\textbf{On-line strip packing}
In the \ki{strip packing problem}\inddef{strip packing problem}
there is a set of two dimensional rectangles, defined by their
widths and heights, and the task is to pack them into a vertical
strip of width $w$ without rotation minimising the total height
of the strip. We assume that the width of each rectangle is at
most $w$ and the height of each rectangle is at most $1$. This
problem appears in many situations. Usually, scheduling of tasks
with shared resource involves two dimensions, namely the resource and the
time. We can consider the widths as the resources and the heights
as the times. Our goal is to minimise the total amount of time
used. Some applications can be found in computer scheduling
problems. We consider the on-line version where the rectangles
arrive from a list one by one and we have to pack each rectangle
into the vertical strip without any information about the further
items. Most of the algorithms developed for the strip packing
problem belong to the class of shelf algorithms. We consider this
family of algorithms below. \\
\textbf{\textsc{Shelf} algorithms\inddef{\textsc{Shelf} algorithms}}
A basic way of packing into the strip is to define shelves and
pack the rectangles onto the shelves. By \ki{shelf} we mean a
rectangular part of the strip. Shelf packing algorithms place each
rectangle onto one of the shelves. Once the algorithm has decided
which shelf will contain the rectangle, the rectangle is placed
onto that shelf as far to the left as possible without
overlapping the other rectangles placed onto the shelf earlier.
Therefore, after the arrival of a rectangle, the
algorithm has to make two decisions. The first decision is whether
to create a new shelf or not. If the algorithm creates a new shelf,
it also has to decide the height of the new shelf. The created
shelves always start from the top of the previous shelf. The first
shelf is placed to the bottom of the strip. The algorithm also has
to choose which shelf to put the rectangle onto. Hereafter we
will say that it is possible to pack a rectangle onto
a shelf, if there is enough room for the rectangle on the shelf.
It is obvious that if a rectangle is higher than a shelf, we cannot
place it onto the shelf.
We consider only one algorithm in detail. This algorithm was
developed and analysed by Baker and Schwarz\nevindex{Baker, S.
Brenda}\nevindex{Schwarz, S. Jerald} in 1983 and it is called
\textsc{Next-Fit-Shelf} ($\textsc{NFS}_r$) \inddef{NFSr@\textsc{NFS}$_r$
algorithm} algorithm. The algorithm depends on a parameter $r<1.$
For each $j$ there is at most one active shelf with height $r^j.$
We define how the algorithm works below.
After the arrival of a rectangle $p_i=(w_i,h_i)$ choose a value
for $k$ which satisfies $r^{k+1} < h_i \le r^k.$ If there is an
active shelf with height $r^k$ and it is possible to pack the
rectangle onto it, then pack it there. If there is no active shelf
with height $r^k$, or it is not possible to pack the rectangle
onto the active shelf with height $r^k$, then create a new shelf
with height $r^k$, put the rectangle onto it, and let this new
shelf be the active shelf with height $r^k$ (if we had an
active shelf with height $r^k$ earlier, then we close it).
\begin{pelda} Let $r=1/2$. Suppose that the size of the first item is $(w/2,3/4)$.
Therefore, it is assigned to a shelf of height $1$. We define a shelf of
height $1$ at the bottom of the strip; this will be the
active shelf with height $1$ and we place the item into the left
corner of this shelf. Suppose that the size of the next item is
$(3w/4,1/4)$. In this case it is placed onto a shelf of height $1/4$.
There is no active shelf with this height so we define a new shelf
of height $1/4$ on the top of the previous shelf. This will be the
active shelf of height $1/4$ and the item is placed onto its left
corner. Suppose that the size of the next item is $(3w/4,5/8)$.
This item is placed onto a shelf of height $1$.
It is not possible to pack it onto the active shelf, thus we close the active shelf and we define a new shelf of
height $1$ on the top of the previous shelf. This will be the active shelf of
height $1$ and the item is placed into its left corner.
Suppose that the size of the next item is $(w/8,3/16)$. This
item is placed onto a shelf of height $1/4$. We can pack it onto
the active shelf of height $1/4$, thus we pack it onto that shelf,
as far to the left as possible.
\end{pelda}
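The run of $\textsc{NFS}_r$ in the example can be reproduced by a short simulation; the sketch below (all names are ours) measures widths as fractions of $w=1$ and returns the total height of the created shelves, which for the four items of the example is $1+1/4+1=9/4$.

```python
def nfs(rects, r=0.5, w=1.0):
    """Next-Fit-Shelf: rects is a list of (width, height) pairs with
    heights in (0, 1]; returns the total height of the created shelves.
    For each height class r^k at most one shelf is active."""
    shelves = []              # every shelf as [height, used width]
    active = {}               # k -> index of the active shelf of height r^k
    for wid, h in rects:
        k = 0                 # find k with r^(k+1) < h <= r^k
        while r ** (k + 1) >= h:
            k += 1
        i = active.get(k)
        if i is not None and shelves[i][1] + wid <= w + 1e-12:
            shelves[i][1] += wid            # fits on the active shelf
        else:
            shelves.append([r ** k, wid])   # open and activate a new shelf
            active[k] = len(shelves) - 1
    return sum(height for height, _ in shelves)

# the example above, with widths given as fractions of w = 1
print(nfs([(0.5, 0.75), (0.75, 0.25), (0.75, 0.625), (0.125, 0.1875)]))  # 2.25
```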
For the competitive ratio of $\textsc{NFS}_r$ the following statements are valid.
\begin{tetel}\label{onll6}
Algorithm $\textsc{NFS}_r$ is $\bigl(\frac{2}{r}+
\frac{1}{r(1-r)}\bigr)$-competitive. Algorithm $\textsc{NFS}_r$ is
asymptotically $2/r$-competitive.
\end{tetel}
\begin{biz}
First we prove that the algorithm is $\bigl(\frac{2}{r}+
\frac{1}{r(1-r)}\bigr)$-competitive. Consider an arbitrary list of
rectangles and denote it by $L$. Let $H_A$ denote the sum of the
heights of the shelves which are active at the end of the packing,
and let $H_C$ be the sum of the heights of the other shelves. Let
$h$ be the height of the highest active shelf ($h=r^j$ for some $j$),
and let $H$ be the height of the highest rectangle. Since the
algorithm created a shelf with height $h$, we have
$H > rh$. As there is at most $1$ active shelf for each height,
$$H_A \le h \sum_{i=0}^{\infty} r^{i} =\frac{ h}{1-r} \le \frac{H}{r(1-r)}\koz.$$
Consider the shelves which are not active at the end. Consider the
shelves of height $hr^i$ for each $i$, and denote the number of the
closed shelves by $n_i$. Let $S$ be one of these shelves with
height $hr^i$. The next shelf $S'$ with height $hr^i$ contains a
rectangle which would not fit onto $S$. Therefore, the total width
of the rectangles packed onto $S$ together with this rectangle is
greater than $w$. Furthermore, the height of each of these
rectangles is greater than $hr^{i+1}$, thus the total area of the
rectangles packed onto $S$ and $S'$ is at least $hwr^{i+1}$.
If we pair the shelves of height $hr^i$ for each $i$ in this way,
using the active shelf if the number of the shelves of the
considered height is odd, we obtain that the total area of the
rectangles assigned to the shelves of height $hr^{i}$ is at least
$wn_ihr^{i+1}/2$. Thus the total area of the rectangles is at
least $\sum_{i=0}^{\infty}wn_ihr^{i+1}/2$, and this yields that
\textsc{OPT}$(L)\ge \sum_{i=0}^{\infty}n_ihr^{i+1}/2$. On the
other hand, the total height of the closed shelves is
$H_C=\sum_{i=0}^{\infty}n_ihr^{i}$, and we obtain that $H_C\le 2
\mbox{\textsc{OPT}}(L)/r$.
Since \textsc{OPT}$(L)\ge H$ is valid, we have proved the required inequality
$$\mbox{\textsc{NFS}}_r(L) \le \mbox{\textsc{OPT}}(L)\left(\frac{2}{r}+ \frac{1}{r(1-r)}\right) \koz.$$
Since the heights of the rectangles are bounded by $1$, $H$
and $H_A$ are bounded by a constant, so we obtain the
result about the asymptotic competitive ratio immediately.
\end{biz}
Besides this algorithm some other shelf algorithms have been
investigated for the solution of the problem. We can interpret the
basic idea of $\textsc{NFS}_r$ as follows. We define classes of
items belonging to types of shelves, and the rectangles assigned to
the classes are packed by the classical bin packing algorithm
\textsc{NF}. It is an evident idea to use other bin packing
algorithms. The best shelf algorithm currently known was developed by
Csirik\nevindex{Csirik, János} and Woeginger\nevindex{Woeginger, J.
Gerhard} in 1997. That algorithm uses the harmonic bin packing
algorithm to pack the rectangles assigned to the classes.
\begin{gyak}
\ujgyak Suppose that the size of the items is bounded above by
$1/3$. Prove that under this assumption the asymptotic competitive
ratio of \textsc{NF}\gyakindex{NF} is $3/2.$
\ujgyak Suppose that the size of
the items is bounded above by $1/3.$ Prove Lemma \ref{onll4} under
this assumption.
\ujgyak Suppose that the sequence of items is
given by a list $L_1L_2L_3,$ where $L_1$ contains $n$ items of
size $1/2$, $L_2$ contains $n$ items of size $1/3$, $L_3$
contains $n$ items of size $1/4$. Which packing patterns can be
used? Which patterns belong to class $T_2$?
\ujgyak Consider the version of the strip packing problem where
one can lengthen the rectangles keeping the area fixed. Consider
the extension of $\textsc{NFS}_r$\gyakindex{NFSr@\textsc{NFS}$_r$ algorithm} which lengthen the rectangles
before the packing to the same height as the shelf which is
chosen to pack them onto. Prove that this algorithm is $2+ \frac{1}{r(1-r)}$-competitive.
\end{gyak}
\section{On-line scheduling}
The area of scheduling theory has a huge literature. The first
result in on-line scheduling belongs to Graham,\nevindex{Graham,
Ronald Lewis} who analysed the List scheduling algorithm in 1966.
We note that Graham\nevindex{Graham, Ronald Lewis} did not use the
terminology which was later developed in the area of on-line
algorithms, and he did not consider the algorithm as an on-line
algorithm; he analysed it as an approximation algorithm.
From the area of scheduling we only recall the definitions which
are used in this chapter.
In a scheduling problem we have to find an optimal schedule of
jobs. We consider the parallel machines case, where $m$ machines are given,
and we can use them to schedule the jobs.
In the most fundamental model each job has a known
processing time, and to schedule a job we have to assign it to a
machine and to give its starting time and completion
time, where the difference between the completion time and the
starting time is the processing time. No machine may
simultaneously run two jobs.
Concerning the machine environment three different models are
considered. If the processing time of a job is the same for each
machine, then we call the machines identical machines. If each
machine has a speed $s_i$, the jobs have a processing weight $p_j$
and the processing time of job $j$ on the $i$-th machine is
$p_j/s_i$, then we call the machines related machines. If the
processing time of job $j$ is given by an arbitrary positive
vector $P_j=(p_j(1),\dots,p_j(m))$, where the processing time of
the job on the $i$-th machine is $p_j(i)$, then we call the
machines unrelated machines.
Many objective functions are considered for scheduling problems,
but here we consider only such models where the goal is the
minimisation of the makespan (the maximal completion time).
In the next subsection we define the two most fundamental on-line
scheduling models, and in the following two subsections we
consider these models in details.
\subsection{On-line scheduling models}
Probably the following models are the most fundamental examples of
on-line machine scheduling problems.
\bigskip
\textbf{LIST model}\inddef{LIST on-line scheduling model}
\medskip
In this model we have a fixed number of machines denoted by
$M_1,M_2,\dots,M_m$, and the jobs arrive from a list. This means
that the jobs and their processing times are revealed to the
on-line algorithm one by one. When a job is revealed, the on-line
algorithm has to assign the job to a machine with a
starting time and a completion time irrevocably.
By the \ki{load}\inddef{load} of a machine, we mean the sum of
the processing times of all jobs assigned to the machine. Since
the objective function is to minimise the maximal completion time,
it is enough to consider the schedules where the jobs are
scheduled on the machines without idle time. For these schedules
the maximal completion time is the load for each machine.
Therefore this scheduling problem is reduced to a load balancing
problem, i.e. the algorithm has to assign the jobs to the machines
minimising the maximum load, which is the makespan in this case.
\begin{pelda} Consider the LIST model and two identical machines.
Consider the following sequence of jobs where the jobs are given
by their processing time: $I=\{j_1=4,j_2=3,j_3=2,j_4=5 \}$. The
on-line algorithm first receives job $j_1$ from the list, and the
algorithm has to assign this job to one of the machines. Suppose
that the job is assigned to machine $M_1$. After that the on-line
algorithm receives job $j_2$ from the list, and the algorithm has to
assign this job to one of the machines. Suppose that the job is
assigned to machine $M_2$. After that the on-line algorithm receives job
$j_3$ from the list, and the algorithm has to assign this job to one
of the machines. Suppose that the job is assigned to machine
$M_2$. Finally, the on-line algorithm receives job $j_4$ from the
list, and the algorithm has to assign this job to one of the machines.
Suppose that the job is assigned to machine $M_1$. Then the loads
on the machines are $4+5$ and $3+2$, and we can give a schedule
where the maximal completion times on the machines are the loads:
we can schedule the jobs on the first machine in time
intervals $(0,4)$ and $(4,9)$, and we can schedule the jobs on the
second machine in time intervals $(0,3)$ and $(3,5)$.
\end{pelda}
\bigskip
\textbf{TIME model}\inddef{TIME on-line scheduling model}
\medskip
In this model there is again a fixed number of machines. Each job
has a processing time and a \ki{release time.} \inddef{release
time} A job is revealed to the on-line algorithm at its release
time. For each job the on-line algorithm has to choose which machine
it will run on and assign a start time. No machine may
simultaneously run two jobs. Note that the algorithm is not
required to assign a job immediately at its release time. However,
if the on-line algorithm assigns a job at time $t$ then it cannot
use information about jobs released after time $t$ and it cannot
start the job before time $t$. Our aim is to minimise the
makespan.
\begin{pelda} Consider the TIME model with two related machines.
Let $M_1$ be the first machine with speed $1$, and $M_2$ be the second
machine with speed $2$. Consider the following input
$I=\{j_1=(1,0),j_2=(1,1),j_3=(1,1),j_4=(1,1)\}$, where the jobs
are given by the (processing time, release time) pairs. Thus a
job arrives at time $0$ with processing time $1$, and the algorithm
can either start to process it on one of the machines or wait for
jobs with larger processing time. Suppose that the algorithm waits
till time $1/2$ and then it starts to process the job on machine
$M_1$. The completion time of the job is $3/2$. At time $1$ three
further jobs arrive, and at that time only $M_2$ can be used. Suppose
that the algorithm starts to process job $j_2$ on this machine. At
time $3/2$ both jobs are completed. Suppose that the remaining
jobs are started on machines $M_1$ and $M_2$, and the completion times
are $5/2$ and $2$, thus the makespan achieved by the algorithm is
$5/2$. Observe that an algorithm which starts the first job
immediately at time $0$ can make a better schedule with makespan
$2$. But it is important to note that in some cases it can be
useful to wait for larger jobs before starting a job.
\end{pelda}
\subsection{LIST model}
The first algorithm in this model was developed by Graham.
Algorithm \textsc{LIST} assigns each job to the machine where the
actual load is minimal. If there are several machines with this
property, it uses the machine with the smallest index. This means
that the algorithm tries to balance the loads on the machines. The
competitive ratio of this algorithm is determined by the following
theorem.
\begin{tetel}\label{onlu1} The competitive ratio of algorithm \textsc{LIST} is
$2-1/m$ in the case of identical machines.
\end{tetel}
\begin{biz} First we prove that the algorithm is $2-1/m$-competitive.
Consider an arbitrary input sequence denoted by
$\sigma=\{j_1,\dots,j_n\}$, and denote the processing times by
$p_1,\dots,p_n$. Consider the schedule produced by \textsc{LIST}.
Let $j_l$ be a job with maximal completion time. Investigate the
starting time $S_l$ of this job. Since \textsc{LIST} chooses the
machine with minimal load, the load was at least $S_l$ on
each of the machines when $j_l$ was scheduled. Therefore we
obtain that
$$S_l \le {1\over m}\sum_{{j=1}\atop {j\not =l}}^n p_j= {1\over m} (\sum_{j=1}^n p_j -
p_l)={1\over m}(\sum _{j=1}^np_j)-{1\over m}p_l \koz.$$
\noindent This yields that
$$\mbox{LIST}(\sigma) =S_l+p_l\le {1\over m} (\sum_{j=1}^n p_j) + {{m-1}\over m} p_l \koz.$$
On the other hand, \textsc{OPT} also processes all of the jobs, thus
we obtain that OPT$(\sigma) \ge {1\over m} (\sum_{j=1}^n p_j) $.
Furthermore, $p_l$ is scheduled on one of the machines of
\textsc{OPT}, thus OPT$(\sigma) \ge p_l$. Due to these bounds we
obtain that
$$\mbox{\textsc{LIST}}(\sigma) \le \left(1+{{m-1}\over m} \right)\mbox{OPT}(\sigma) \koz ,$$
\noindent which inequality proves that \textsc{LIST} is
$2-1/m$-competitive.
\smallskip
Now we prove that the bound is tight. Consider the following
input. It contains $m(m-1)$ jobs with processing time $1/m$ and
one job with processing time $1$. \textsc{LIST} assigns
$m-1$ small jobs to each machine and the last large job is
assigned to $M_1$. Therefore its makespan is $1+(m-1)/m$. On the
other hand, the optimal algorithm schedules the large job on $M_1$
and $m$ small jobs on each of the other machines, thus its makespan is $1$.
Thus the ratio of the makespans is $2-1/m$ which shows that the
competitive ratio of \textsc{LIST} is at least $2-1/m$.
\end{biz}
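The tight instance of the proof is easy to check by simulation; a minimal sketch of \textsc{LIST} in Python (function name is ours):

```python
def list_schedule(jobs, m):
    """Graham's LIST rule on m identical machines: each job goes to a
    machine with minimal current load (smallest index on ties);
    returns the makespan, i.e. the maximal load."""
    loads = [0.0] * m
    for p in jobs:
        i = loads.index(min(loads))   # least loaded machine, smallest index
        loads[i] += p
    return max(loads)

# the tight instance of the proof for m = 4:
m = 4
jobs = [1 / m] * (m * (m - 1)) + [1.0]
print(list_schedule(jobs, m))   # 1.75 = 2 - 1/m, while OPT = 1
```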
Although it is hard to imagine a more natural algorithm for the
on-line case, many other algorithms have been developed. The
competitive ratios of the better algorithms tend to numbers
smaller than $2$ as the number of machines tends to $\infty$. Most of these algorithms are
based on the following idea. The jobs are scheduled keeping the
load uniformly on most of the machines, but in contrast to
\textsc{LIST}, the loads are kept low on some of the machines,
keeping the possibility of using these machines for scheduling large
jobs which may arrive later.
Below we consider the more general cases where the machines are
not identical. Here \textsc{LIST} may perform very badly, since the
processing time of a job can be very large on the machine where
the actual load is minimal. However, we can easily modify the greedy
idea of \textsc{LIST} as follows. The extended algorithm is called
\textsc{Greedy} and it assigns the job to the machine where the
load with the processing time of the job is minimal. If there
are several machines with this minimal value, then the algorithm
chooses from them the machine where the processing time of the
job is minimal; if there are still several machines with this
property, the algorithm chooses the one with the smallest index.
\begin{pelda}
Consider the case of related machines where there are $3$ machines
$M_1,M_2,M_3$ and the speeds are $s_1=s_2=1$, $s_3=3$. Suppose
that the input is $I=\{j_1=2,j_2=1,j_3=1,j_4=3,j_5=2 \}$, where
the jobs are defined by their processing weight. The load
after the first job is $2/3$ on machine $M_3$ and $2$ on the other
machines, thus $j_1$ is assigned to $M_3$. The load after job
$j_2$ is $1$ on all of the machines, and its processing time is
minimal on machine $M_3$, thus \textsc{Greedy} assigns it to
$M_3$. The load after job $j_3$ is $1$ on $M_1$ and $M_2$, and
$4/3$ on $M_3$, thus the job is assigned to $M_1$. The load after
job $j_4$ is $4$ on $M_1$, $3$ on $M_2$, and $2$ on $M_3$, thus
the job is assigned to $M_3$. Finally, the load after job $j_5$ is
$3$ on $M_1$, $2$ on $M_2$, and $8/3$ on $M_3$, thus the job
is assigned to $M_2$.
\end{pelda}
\begin{pelda}
Consider the case of unrelated machines with two machines and the
following input: $I=\{j_1=(1,2),j_2=(1,2),j_3=(1,3),j_4=(1,3)\}$,
where the jobs are defined by the vectors of processing times. The
load after job $j_1$ is $1$ on $M_1$ and $2$ on $M_2$, thus the
job is assigned to $M_1$. The load after job $j_2$ is $2$ on $M_1$
and also on $M_2$, thus the job is assigned to $M_1$, because it
has smaller processing time. The load after job $j_3$ is $3$
on $M_1$ and $M_2$, thus the job is assigned to $M_1$ because it
has smaller processing time. Finally, the load after job
$j_4$ is $4$ on $M_1$ and $3$ on $M_2$, thus the job is assigned
to $M_2$.
\end{pelda}
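The example can be reproduced by a direct simulation of \textsc{Greedy}; in the sketch below (all names are ours) machines are indexed from $0$, so the assignment of the example reads $[0,0,0,1]$.

```python
def greedy_unrelated(jobs, m):
    """Greedy on m unrelated machines: jobs[i] is the vector of the
    processing times of job i.  Each job goes to the machine minimising
    (load + processing time, processing time, machine index); returns
    the assignment (0-based machine indices) and the makespan."""
    loads = [0.0] * m
    assignment = []
    for p in jobs:
        i = min(range(m), key=lambda j: (loads[j] + p[j], p[j], j))
        loads[i] += p[i]
        assignment.append(i)
    return assignment, max(loads)

# the example above (machines are 0-based here)
print(greedy_unrelated([(1, 2), (1, 2), (1, 3), (1, 3)], 2))
# ([0, 0, 0, 1], 3.0)
```

The lexicographic key in `min` encodes the full tie-breaking rule of \textsc{Greedy} in a single expression.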
The competitive ratio of the algorithm is determined by the following theorems.
\begin{tetel}\label{onlu2} The competitive ratio of algorithm \textsc{Greedy} is
$m$ in the case of unrelated machines.
\end{tetel}
\begin{biz} First we prove that the competitive ratio of the algorithm is at
least $m$. Consider the following input sequence. Let $\varepsilon
>0$ be an arbitrarily small number. The sequence contains $m$ jobs. The
processing time of job $j_1$ is $1$ on machine $M_1$, $1+
\varepsilon$ on machine $M_m$, and $\infty$ on the other
machines ($p_1(1)=1$, $p_1(i)=\infty$ for $i=2,\dots,m-1$,
$p_1(m)=1+\varepsilon$). For job $j_i$, $i=2,\dots, m$, the
processing time is $i$ on machine
$M_i$, $1+\varepsilon$ on machine $M_{i-1}$ and
$\infty$ on the other machines ($p_i(i-1)=1+\varepsilon$, $p_i(i)=i$,
$p_i(j)=\infty$, if $j\not=i-1$ and $j \not=i$).
In this case job $j_i$ is scheduled on $M_i$ by \textsc{Greedy} and the
makespan is $m$. On the other hand, the optimal off-line algorithm
schedules $j_1$ on $M_m$ and $j_i$ on $M_{i-1}$ for
$i=2,\dots,m$, thus the optimal makespan is $1+\varepsilon$. The
ratio of the makespans is $m/(1+\varepsilon)$. This ratio tends to
$m$, as $\varepsilon$ tends to $0$, and this proves that the
competitive ratio of the algorithm is at least $m$.
Now we prove that the algorithm is $m$-competitive. Consider an
arbitrary input sequence, denote the makespan in the optimal
off-line schedule by $L^*$ and let $L(k)$ denote the maximal load
in the schedule produced by \textsc{Greedy} after scheduling the
first $k$ jobs. Since the processing time of the $i$-th job is at
least ${\rm min}_j p_i(j)$, and the load is at most $L^*$ on the
machines in the off-line optimal schedule, we obtain that $m
L^* \ge \sum_{i=1}^{n} {\rm min}_j p_i(j)$.
We prove by induction that the inequality $L(k) \le \sum_{i=1}^{k}
{\rm min}_j p_i(j)$ is valid. Since the first job is assigned to
the machine where its processing time is minimal, the
statement is obviously true for $k=1$. Let $1\le k \le n-1$ and
suppose that the inequality is valid for $k$. If \textsc{Greedy}
assigned the $(k+1)$-th job to a machine where its processing time
is minimal, then the load of this machine would become at most
$L(k)+{\rm min}_j p_{k+1}(j)$. Since \textsc{Greedy} assigns the
job to the machine where the load with the processing time of the
job is minimal, we obtain that $L(k+1) \le L(k)+{\rm min}_j
p_{k+1}(j)$, and by the induction hypothesis this proves the
inequality for $k+1$. Therefore $\mbox{\textsc{Greedy}}(\sigma)=L(n)
\le \sum_{i=1}^{n} {\rm min}_j p_i(j) \le m L^*$, which proves that
the algorithm is $m$-competitive.
\end{biz}
\subsection{TIME model}
In this model the jobs arrive over time, and the algorithm is
allowed to wait before scheduling the released jobs. A fundamental
algorithm of the model is the interval scheduling algorithm
\textsc{INTV}.\inddef{\textsc{INTV} algorithm} The basic idea is to
divide the time into intervals: the $i$-th interval starts at time
$t_i$, where $t_1=0$. At the beginning of each interval the
algorithm computes an optimal off-line schedule of the jobs which
have been released but not scheduled yet, and it schedules these
jobs according to this off-line schedule. The algorithm can be
given by the following pseudocode.
\begin{alg}{\textsc{INTV}$(I)$}
1 \' $i \leftarrow 1$, $t_1 \leftarrow 0$ \\
2 \' \key{while} \= the input sequence is not finished \\
3 \' \> let $H_i$ be the set of the unscheduled jobs released till $t_i$ \\
4 \' \> let $OFF_i$ be an optimal off-line schedule of the jobs of $H_i$ \\
5 \' \> schedule the jobs as it is determined by $OFF_i$ starting the schedule at $t_i$ \\
6 \' \> let $q_i$ be the maximal completion time \\
7 \' \> \key{if} \= a new job is released in time interval $(t_i,q_i]$ or the sequence is finished \\
8 \' \> \> \key{then} \= $t_{i+1} \leftarrow q_i$ \\
9 \' \> \> \key{else} \> let $t_{i+1}$ be the release time of the next job \\
10 \' \> $i \leftarrow i+1$
\end{alg}
\begin{pelda}
Consider two identical machines. Suppose that the sequence of jobs
is $I=\{j_1=(1,0),j_2=(1/2,0),j_3=(1/2,0),j_4=(1,3/2),j_5=(1,3/2),
j_6=(2,2)\}$, where the jobs are defined by the (processing time,
release time) pairs. In the first iteration $j_1,j_2,j_3$ are
scheduled: an optimal off-line algorithm schedules $j_1$ on
machine $M_1$ and $j_2,j_3$ on machine $M_2$, so the jobs are
completed at time $1$. Since no new job has been released in the time
interval $(0,1]$, the algorithm waits for a new job until time
$3/2$. Then the second iteration starts: $j_4$ and $j_5$ are
scheduled on $M_1$ and $M_2$ respectively in the time interval
$[3/2,5/2)$. During this time interval $j_6$ has been released, thus at time
$5/2$ the next iteration starts and \textsc{INTV} schedules $j_6$
on $M_1$ in the time interval $[5/2,9/2]$.
\end{pelda}
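The behaviour of \textsc{INTV} in the example can be reproduced by a short simulation. The brute-force off-line solver and all names below are ours; it is only practical for tiny instances on identical machines, which is enough to replay the example.

```python
from itertools import product

def opt_makespan(times, m):
    """Brute-force minimal makespan of a set of jobs (all available
    immediately) on m identical machines; fine for tiny instances."""
    best = float("inf")
    for assign in product(range(m), repeat=len(times)):
        loads = [0.0] * m
        for p, i in zip(times, assign):
            loads[i] += p
        best = min(best, max(loads))
    return best

def intv(jobs, m):
    """Simulate INTV: jobs are (processing time, release time) pairs.
    Each iteration schedules the released, unscheduled jobs by an
    optimal off-line schedule; returns the achieved makespan."""
    pending = list(jobs)
    t = 0.0
    while pending:
        batch = [p for p, r in pending if r <= t]      # the set H_i
        pending = [(p, r) for p, r in pending if r > t]
        q = t + opt_makespan(batch, m)                 # maximal completion time q_i
        if pending and not any(r <= q for _, r in pending):
            t = min(r for _, r in pending)             # wait for the next release
        else:
            t = q                                      # next interval starts at q_i
    return t

jobs = [(1, 0), (0.5, 0), (0.5, 0), (1, 1.5), (1, 1.5), (2, 2)]
print(intv(jobs, 2))   # 4.5, as in the example above
```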
The following statement holds for the competitive ratio of
algorithm \textsc{INTV}.
\begin{tetel}\label{onlu5}
In the TIME model algorithm \textsc{INTV} is 2-competitive.
\end{tetel}
\begin{biz} Consider an arbitrary input and the schedule produced by
\textsc{INTV}. Denote the number of iterations by $i$. Let
$T_3=t_{i+1}-t_{i}$, $T_2=t_{i}-t_{i-1}$, $T_1=t_{i-1}$ and let
$T_{\mbox{OPT}}$ denote the optimal off-line cost. In this case $T_2 \le
T_{\mbox{OPT}}$. This inequality is obvious if $t_{i}\not=q_{i-1}$,
since in that case $t_{i}$ is the release time of some job, thus
$T_{\mbox{OPT}} \ge t_{i} \ge T_2$. If $t_{i}=q_{i-1}$, then the inequality
holds, because also the optimal off-line algorithm has to
schedule the jobs which are scheduled in the $(i-1)$-th
iteration by \textsc{INTV}, and \textsc{INTV} uses an optimal off-line schedule in each iteration.
On the other hand, $T_1+T_3 \le T_{\mbox{OPT}}$. To prove this inequality
first observe that the release time is at least $T_1=t_{i-1}$ for
the jobs scheduled in the $i$-th iteration (otherwise the
algorithm would have scheduled them in the $(i-1)$-th iteration).
Therefore also the optimal algorithm has to schedule these jobs
after time $T_1$. On the other hand, it takes at least
$T_3$ time units to process these jobs, because \textsc{INTV} uses an
optimal off-line schedule in each iteration. The makespan of the schedule
produced by \textsc{INTV} is $T_1+T_2+T_3$, and we have shown that
$T_1+T_2+T_3\le 2 T_{\mbox{OPT}}$, thus we proved that the algorithm is
$2$-competitive.
\end{biz}
Some other algorithms have also been developed in the TIME model.
Vestjens \nevindex{Vestjens, Arjen} proved that the \ki{on-line
\textsc{LPT}}\inddef{on-line \textsc{LPT}} algorithm is
$3/2$-competitive. This algorithm schedules the longest
unscheduled, released job at each time when some machine is
available. The following lower bound for the possible competitive
ratios of on-line algorithms is also due to Vestjens.
\begin{tetel}\label{onlu6}
The competitive ratio of any on-line algorithm is at least
$1.3473$ in the TIME model for minimising the makespan.
\end{tetel}
\begin{biz}
Let $\alpha \approx 0.3473$ be the solution of the equation
$\alpha^3-3\alpha+1=0$ which belongs to the interval $[1/3,1/2]$.
We prove that no on-line algorithm can have smaller competitive
ratio than $1+ \alpha$. Consider an arbitrary on-line algorithm,
denote it by \textsc{ALG}. Investigate the following input
sequence.
At time $0$ one job arrives with processing time $1$. Let $S_1$ be
the time when the algorithm starts to process the job on one of
the machines. If $S_1
> \alpha$, then the sequence is finished and
$\mbox{\textsc{ALG}}(I)/\mbox{\textsc{OPT}}(I)>1+ \alpha$, which proves the
statement. So we can suppose that $S_1 \le \alpha$.
The release time of the next job is $S_1$ and its processing time
is $\alpha/(1-\alpha)$. Denote its starting time by $S_2$. If $S_2
\le S_1+1-\alpha/(1-\alpha)$, then we end the sequence with $m-1$
jobs having release time $S_2$, and processing time
$1+\alpha/(1-\alpha)-S_2$. In this case an optimal off-line algorithm
schedules the first two jobs on the same machine and the last
$m-1$ jobs on the other machines starting them at time $S_2$, thus
its cost is $1+\alpha/(1-\alpha)$. On the other hand, the on-line
algorithm must schedule one of the last $m-1$ jobs after the
completion of the first or the second job, thus $\textsc{ALG}(I)
\ge 1+2 \alpha/(1-\alpha)$ in this case. Since
$\bigl(1+2\alpha/(1-\alpha)\bigr)/\bigl(1+\alpha/(1-\alpha)\bigr)=1+\alpha$,
the competitive ratio of the algorithm is at least $1+ \alpha$.
Therefore we can suppose that $S_2> S_1+1-\alpha/(1-\alpha)$.
At time $S_1+1-\alpha/(1-\alpha)$ further $m-2$ jobs arrive
with processing times $\alpha/(1-\alpha)$ and one job with
processing time $1-\alpha/(1-\alpha)$. The optimal off-line
algorithm schedules the second and the last jobs on the same
machine, the other jobs are scheduled one by one on the other
machines, and the makespan of the schedule is $1+S_1$. Since before time
$S_1+1-\alpha/(1-\alpha)$ none of the last $m$ jobs is started by
\textsc{ALG}, after this time \textsc{ALG} must schedule at
least two jobs on one of the machines and the maximal completion
time is at least $S_1+2-\alpha/(1-\alpha)$. Since $S_1\le \alpha$,
the ratio $\mbox{\textsc{ALG}}(I)/\mbox{\textsc{OPT}}(I)$ is minimal if
$S_1=\alpha$, and in this case the ratio is $1+ \alpha$, which
proves the theorem.
\end{biz}
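The arithmetic behind the adversary argument can be checked numerically. In the sketch below (our illustration; the names are not from the text) $\alpha$ is obtained by bisection, and both branches of the construction are verified to force a ratio of $1+\alpha$.

```python
def bisect_root(f, lo, hi, tol=1e-12):
    # Bisection for a root of f in [lo, hi]; f changes sign there.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# The root of a^3 - 3a + 1 = 0 lying in [1/3, 1/2].
alpha = bisect_root(lambda a: a**3 - 3*a + 1, 1/3, 1/2)
b = alpha / (1 - alpha)

# First branch of the adversary: OPT = 1 + b, ALG >= 1 + 2b.
ratio1 = (1 + 2*b) / (1 + b)

# Second branch with S_1 = alpha: OPT = 1 + alpha and
# ALG >= S_1 + 2 - b.
ratio2 = (alpha + 2 - b) / (1 + alpha)

# Both ratios equal 1 + alpha up to the bisection tolerance.
print(alpha, ratio1, ratio2)
```

The first ratio equals $1+\alpha$ identically, while the second equals $1+\alpha$ exactly because $\alpha$ is a root of $\alpha^3-3\alpha+1$; this is where the choice of the equation comes from.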
\begin{gyak}
\ujgyak Prove that the competitive ratio is at least $3/2$ for any
on-line algorithm in the case of two identical machines.
\ujgyak
Prove that \textsc{LIST}\gyakindex{LIST@\textsc{List} algorithm} is not constant competitive in the case of
unrelated machines.
\ujgyak Prove that the modification of
\textsc{INTV} which uses a $c$-approximation schedule (a schedule
with at most $c$ times more cost than the optimal cost)
instead of the optimal off-line schedule in each step is $2c$-competitive.
\end{gyak}
\begin{fld}
\ujfld{Paging problem}{Consider \label{onlf1} the special\felindex{paging problem} case of
the $k$-server problem, where the distance between each pair of
points is $1$. (This problem is equivalent to the on-line paging
problem.) Analyse the algorithm which serves each request arriving
at a point without a server by the least recently used server.
(This algorithm is equivalent to the \textsc{LRU}
paging algorithm.) Prove that the algorithm is $k$-competitive.}
\ujfld{\textsc{ALARM2}\felindex{Alarm@\textsc{Alarm} algorithm} algorithm} {Consider \label{onlf2} the
following alarming algorithm for the data acknowledgement problem.
\textsc{ALARM2} is obtained from the general definition with the
values $e_j=1/\vert \sigma_j \vert$. Prove that the algorithm is
not constant-competitive.}
\ujfld{Bin packing lower bound}{Prove\label{onlf3} that no
on-line algorithm can have smaller competitive ratio than $3/2$
using a sequence which contains items of size $1/7+\varepsilon$,
$1/3+\varepsilon$, $1/2+\varepsilon$, where $\varepsilon$ is a
small positive number.}
\ujfld{Strip packing with modifiable rectangles}{Consider
\label{onlf4} the following version of the strip packing problem.
In the new model the algorithms are allowed to lengthen the
rectangles keeping the area fixed. Develop a $4$-competitive
algorithm for the solution of the problem.}
\ujfld{On-line \textsc{LPT} algorithm}{Consider \label{onlf5} the
algorithm in the TIME model\felindex{TIME model} which starts the longest
released, not yet scheduled job whenever a machine becomes available. This
algorithm is called on-line \textsc{LPT}.\felindex{LPT@\textsc{Lpt}} Prove that the
algorithm is $3/2$-competitive.}
\end{fld}
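The setting of the paging problem above can be explored experimentally. The sketch below (our illustration, not a solution of the problem) simulates \textsc{LRU} against Belady's furthest-in-future rule, which is an optimal off-line paging strategy, so the inequality $\mbox{\textsc{LRU}}(\sigma)\le k\cdot\mbox{\textsc{OPT}}(\sigma)+k$ can be checked on random request sequences.

```python
def lru_faults(requests, k):
    # LRU paging with k pages: on a fault evict the least
    # recently used page; return the number of faults.
    cache, faults = [], 0          # most recently used at the end
    for p in requests:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)       # evict the least recently used
        cache.append(p)
    return faults

def belady_faults(requests, k):
    # Optimal off-line paging: on a fault evict the page whose
    # next request is furthest in the future (Belady's rule).
    cache, faults = set(), 0
    for i, p in enumerate(requests):
        if p in cache:
            continue
        faults += 1
        if len(cache) == k:
            def next_use(q):
                try:
                    return requests.index(q, i + 1)
                except ValueError:
                    return len(requests)   # never requested again
            cache.remove(max(cache, key=next_use))
        cache.add(p)
    return faults
```

On the uniform metric every fault corresponds to a server move, so the fault counts above are exactly the $k$-server costs of the two algorithms.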
\begin{fejmegj}
More details about the results on on-line algorithms can be found
in the books \cite{OnBoYa98, OnFiWo98}.
The first results about the $k$-server problem (Theorems
\ref{onls1} and \ref{onls2}) were published by Manasse,
\nevindex{Manasse, Mark} McGeoch \nevindex{McGeoch, Lyle} and
Sleator \nevindex{Sleator, Daniel} in \cite{OnMaGeSl90}. The
presented algorithm for the line (Theorem \ref{onls3}) was
developed by Chrobak\nevindex{Chrobak, Marek},
Karloff,\nevindex{Karloff, J. Howard} Payne\nevindex{Payne, Tom}
and Viswanathan\nevindex{Vishwanathan, Sundar} (see
\cite{OnChKaPa91}). Later Chrobak\nevindex{Chrobak, Marek} and
Larmore\nevindex{Larmore, Lawrence} extended
the algorithm for trees in \cite{OnChLa91}.
The first constant-competitive algorithm
for the general problem was developed by Fiat\nevindex{Fiat,
Amos}, Rabani\nevindex{Rabani, Yuval} and Ravid\nevindex{Ravid,
Yiftach} (see \cite{OnFiRaRa94}). The best known algorithm is
based on the work function technique. The first work function
algorithm for the problem was developed by
Chrobak \nevindex{Chrobak, Marek} and Larmore\nevindex{Larmore,
Lawrence} in \cite{OnChLa92}. Koutsoupias\nevindex{Koutsoupias, Elias} and
Papadimitriou\nevindex{Papadimitriou, Christos H.} have proved that the work function
algorithm is $(2k-1)$-competitive in \cite{OnKoPa95}.
The first mathematical model for the data acknowledgement problem
and the first results (Theorems \ref{onln1} and \ref{onln2}) are
presented by Dooly,\nevindex{Dooly, R. Dan}
Goldman,\nevindex{Goldman, A. Sally} and Scott\nevindex{Scott,
D. Stephen} in \cite{OnDoGoSc01}. On-line algorithms with the lookahead property are presented in
\cite{ImrehN2007}.\nevindex{Németh, Tamás}\nevindex{Imreh, Csanád} Albers\nevindex{Albers, Susanne} and
Bals\nevindex{Bals, Helge} considered a different objective
function in \cite{OnAlBa03}. Karlin,\nevindex{Karlin, Anna R.}
Kenyon\nevindex{Kenyon, Claire} and Randall\nevindex{Randall,
Dana} investigated randomised algorithms for the data
acknowledgement problem in \cite{OnKaKeRa01}. The
\textsc{Landlord} algorithm was developed by
Young in \cite{OnYou02}. \nevindex{Young, Neal} The detailed description of the
results in the area of on-line routing can be found in the survey
\cite{OnLeo98} written by Leonardi\nevindex{Leonardi, Stefano}.
The exponential algorithm for the load balancing model is
investigated by Aspnes,\nevindex{Aspnes, James}
Azar,\nevindex{Azar, Yossi} Fiat,\nevindex{Fiat, Amos}
Plotkin\nevindex{Plotkin, Serge} and Waarts\nevindex{Waarts, Orli}
in \cite{OnAsAzFi97}. The exponential algorithm for the throughput
objective function is applied by Awerbuch,\nevindex{Awerbuch,
Baruch} Azar \nevindex{Azar, Yossi} and Plotkin
\nevindex{Plotkin, Serge} in \cite{OnAwAzPl93}.
A detailed survey about the theory of on-line bin packing is
written by Csirik\nevindex{Csirik, János} and
Woeginger\nevindex{Woeginger, J. Gerhard} (see \cite{OnCsWo98}).
The algorithms \textsc{NF} and \textsc{FF} are analysed with
competitive analysis by Johnson,\nevindex{Johnson, David S.}
Demers,\nevindex{Demers, Alan} Ullman,\nevindex{Ullman, Jeffrey
David} Garey\nevindex{Garey, Michael R.} and Graham\nevindex{Graham,
Ronald Lewis} in \cite{OnJo74,OnJoDeUl74}, further results can be
found in the PhD thesis of Johnson\nevindex{Johnson, David S.}
(\cite{OnJo73}). Our Theorem \ref{onll1} is a special case of
Theorem 1 in \cite{Ivanyi84}\nevindex{Iványi, Antal Miklós} and Theorem \ref{onll2} is a special case of Theorems
5.8 and 5.9 of the book \cite{Coffman76}\nevindex{Coffman, Ed G., Jr.}
and Corollary 20.13 in the twentieth chapter of this book
\cite{BaloghIvanyi07}.\nevindex{Balogh, Ádám}\nevindex{Iványi, Antal Miklós} Van Vliet\nevindex{van Vliet, André} applied
the packing patterns to prove lower bounds for the possible
competitive ratios in \cite{OnVli92, OnVl95}. For the on-line
strip packing problem algorithm \textsc{NFS}$_r$ was developed and
analysed by Baker\nevindex{Baker, S. Brenda} and
Schwarz\nevindex{Schwarz, S. Jerald} in \cite{OnBaSc82}. Later
further shelf packing algorithms were developed, the best shelf
packing algorithm for the strip packing problem was developed by
Csirik\nevindex{Csirik, János} and Woeginger\nevindex{Woeginger, J.
Gerhard} in \cite{OnCsWo97}.
A detailed survey about the results in the area of on-line
scheduling was written by Sgall \nevindex{Sgall, Jirí}
(\cite{OnSga98}). The first on-line result is the analysis of the
algorithm \textsc{LIST}, which was published by
Graham\nevindex{Graham, Ronald Lewis} in \cite{OnGra66}. Many further algorithms
were developed and analysed for the case of identical machines; the
algorithm with the smallest competitive ratio (which tends to 1.9201 as the
number of machines tends to $\infty$) was developed by Fleischer
\nevindex{Fleischer, Rudolf} and Wahl\nevindex{Wahl, Michaela} in
\cite{OnFlWh00}. The lower bound for the competitive ratio of
\textsc{Greedy} in the related machines model was proved by
Cho\nevindex{Cho, Yookun} and Sahni \nevindex{Sahni, Sartaj} in
\cite{OnChSa80}. The upper bound for the related machines case and a
more sophisticated algorithm based on an exponential function were
presented by Aspnes,\nevindex{Aspnes, James}
Azar,\nevindex{Azar, Yossi} Fiat,\nevindex{Fiat, Amos}
Plotkin\nevindex{Plotkin, Serge} and Waarts\nevindex{Waarts, Orli}
in \cite{OnAsAzFi97}. A summary of further results about the
applications of the exponential function technique in the area of
on-line scheduling can be found in the paper of Azar
\nevindex{Azar, Yossi} (\cite{OnAza98}). The interval algorithm
presented in the TIME model and Theorem \ref{onlu5} are based on
the results of Shmoys,\nevindex{Shmoys, David B.} Wein\nevindex{Wein,
Joel} and Williamson\nevindex{Williamson, David P.} (see
\cite{OnShWeWi95}). A detailed description of further results
(on-line \textsc{LPT}, lower bounds) in the TIME model can be
found in the PhD thesis of Vestjens \nevindex{Vestjens, Arjen}
\cite{OnVes97}. We presented only the most fundamental on-line
scheduling models in this chapter. An interesting model has been
developed recently, in which the number of machines is not fixed
and the algorithm is allowed to purchase machines. This model is
investigated in the papers \cite{OnDoHe04,Dosa2010,Imreh2009,OnImrNo98}.
Problem \ref{onlf1} is based on \cite{OnSlTa85}, Problem \ref{onlf2} is based on \cite{OnDoGoSc01},
Problem \ref{onlf3} is based on \cite{OnYao80}, Problem \ref{onlf4} is based on \cite{OnImr01} and
Problem \ref{onlf5} is based on \cite{OnVes97}.
\end{fejmegj}\index{competive analysis|)}