\chapter{Related Work}

We are only aware of one other individual
who has worked on this problem: Dr. Daniel Arnold
from the University of Bern. He was kind
enough to share some of his progress with us.
We summarize the two main methods he developed below.

The first method is hill climbing. Dr. Arnold
starts from an initial field configuration and
then repeats the following steps many times:
\begin{enumerate}
    \item Propose swapping two pixels
    \item If the swap would be beneficial, perform it; otherwise, do not
    \item Go back to step 1
\end{enumerate}
The program terminates when no more beneficial swaps
can be performed or when it has reached the maximum
number of iterations. This method can guarantee that a
locally optimal solution was found, meaning that no
single swap would improve the score. However, there are
many local optima, and thus it is rather unlikely that
the global optimum is found with this method. A possible
approach for tackling this issue is to rerun the
algorithm with different initial field configurations.
In practice, this is unfortunately not enough to
fix the issue on large fields.
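
To make the loop concrete, the following minimal Python
sketch shows one possible implementation of such a greedy
swap search. The grid \texttt{field}, the cost function
\texttt{score(field)} (assumed here to be minimized), and
the iteration cap are illustrative assumptions on our part,
not Dr. Arnold's actual implementation; the check for the
absence of any remaining beneficial swap is omitted for brevity.
\begin{verbatim}
import random

def hill_climb(field, score, max_iters=100_000):
    # Greedy local search by pixel swaps. `score` is assumed to be a
    # cost function, so a lower value means a better field layout.
    rows, cols = len(field), len(field[0])
    current = score(field)
    for _ in range(max_iters):
        # Propose swapping two randomly chosen pixels.
        r1, c1 = random.randrange(rows), random.randrange(cols)
        r2, c2 = random.randrange(rows), random.randrange(cols)
        field[r1][c1], field[r2][c2] = field[r2][c2], field[r1][c1]
        proposed = score(field)
        if proposed < current:
            # The swap is beneficial: keep it.
            current = proposed
        else:
            # Not beneficial: undo the swap.
            field[r1][c1], field[r2][c2] = field[r2][c2], field[r1][c1]
    return field, current
\end{verbatim}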

The second method is simulated annealing. This
method is similar to the hill climbing approach,
with one key difference: we sometimes accept
configurations that are worse, with a certain
probability $p$. The process works as follows.
Pick an initial field configuration. Then:
\begin{enumerate}
    \item Propose swapping two pixels
    \item Calculate the current probability $p$ (see below)
    \item If the swap would be beneficial, perform it;
    otherwise, perform it with probability $p$
    \item Go back to step 1
\end{enumerate}
where the current acceptance probability $p$ at
iteration $i$ is determined by a Boltzmann distribution:
\begin{align*}
    T_i &= T_0 \cdot K^i\\
    \Delta H &= \text{Score change caused by the proposed swap}\\
    p &= e^{-\Delta H / T_i}
\end{align*}
where $T_0$ and $K$, the latter between $0$ and $1$,
are constants he determines experimentally. The smaller
$K$ is, the faster the method converges. The idea
behind this method is that the algorithm is allowed
to escape local optima in the early stages of
the run. If the algorithm accepts a change
that makes the score worse, it might be able to
get ``unstuck'' from the local optimum it is in
and find a solution closer to, or even equal to,
the global optimum.
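
As with the hill climbing method, a short Python sketch
may help clarify how the acceptance rule and the cooling
schedule interact. Again, \texttt{field}, the cost function
\texttt{score} (minimized, so that a positive $\Delta H$
corresponds to a worse configuration), and the default
values of \texttt{T0} and \texttt{K} are illustrative
assumptions rather than Dr. Arnold's actual parameters.
\begin{verbatim}
import math
import random

def simulated_annealing(field, score, T0=1.0, K=0.999, max_iters=100_000):
    # Simulated annealing with the Boltzmann acceptance rule
    # p = exp(-Delta H / T_i) and geometric cooling T_i = T0 * K^i.
    rows, cols = len(field), len(field[0])
    current = score(field)
    T = T0
    for _ in range(max_iters):
        # Propose swapping two randomly chosen pixels.
        r1, c1 = random.randrange(rows), random.randrange(cols)
        r2, c2 = random.randrange(rows), random.randrange(cols)
        field[r1][c1], field[r2][c2] = field[r2][c2], field[r1][c1]
        delta = score(field) - current  # Delta H of the proposed swap
        if delta <= 0 or random.random() < math.exp(-delta / T):
            # Beneficial swaps are always kept; worse ones with probability p.
            current += delta
        else:
            # Rejected: undo the swap.
            field[r1][c1], field[r2][c2] = field[r2][c2], field[r1][c1]
        T *= K  # cool down for the next iteration
    return field, current
\end{verbatim}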

The simulated annealing approach appears
to work quite well and outperforms the
hill climbing approach by a wide margin.
We elaborate further on the specific results
of this method in the Benchmark problems section.

Finally, Dr. Arnold also considers other practical
aspects of pixel farming, such as which crops
best replace others after a cycle
of farming is completed. We will, however, not
consider these issues in this paper.