Graph Theory

Field: Graph Theory
Image Created By: Awjin Ahn


This is a graph with six vertices and every pair of vertices connected by an edge. It is known as the complete graph K6.


Basic Description

A graph is a collection of two kinds of objects, called vertices and edges.[1] Each edge is defined as a set of two of the vertices. Every vertex in a graph can be represented with a point or small circle, while the edges are drawn as lines between two vertices. The set of vertices in the graph G is usually denoted as V(G) while the set of edges is E(G).

If there exists an edge between vertices u and v, written {u, v}, then u and v are adjacent to each other. Vertices u and v are the endpoints of the edge between them. An edge can also go from a vertex back to itself; this kind of edge is called a loop. Nor does there have to be just one edge between two vertices. There could be 2, or 6, or 1,239,960. Edges that share the same two endpoints are usually referred to as multiple edges.
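
To make these definitions concrete, here is a minimal Python sketch (the vertex names and edges are hypothetical) of a graph stored as a vertex set plus an edge list, a representation that comfortably allows loops and multiple edges:

 # A small multigraph: a vertex set and a list of unordered edges.
 vertices = {"u", "v", "w"}
 edges = [("u", "v"), ("u", "v"),  # two multiple edges between u and v
          ("v", "w"),
          ("w", "w")]              # a loop at w
 
 def adjacent(a, b):
     """True if some edge has a and b as its endpoints."""
     return any({a, b} == {x, y} for x, y in edges)
 
 print(adjacent("u", "v"))  # True
 print(adjacent("u", "w"))  # False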

A digraph and a weighted graph
A graph with an example of a loop (in green), double edges (in purple), and triple edges (in red).

Graphs have more diversity than just structures with simple vertices and edges. A directed graph or digraph is the same as a graph except the edges have a given direction. Edges in a digraph are drawn as arrows going from one vertex to another. A mixed graph is one that can have both regular edges and directed edges. All graphs and digraphs are also mixed graphs, just as a real number is also a complex number.

Another special type of graph is a weighted graph. In a weighted graph, each vertex or edge is given an associated number called a weight. Sometimes a weight on an edge or vertex is just a label, which differentiates the parts of the graph from one another. A weighted graph often has more applications to science or programming than a regular graph does. See the "Why It's Interesting" section for more information on weighted graphs.

Inside a Graph

A map of Königsberg with the bridges in green. Citizens tried to make a trip around the city taking every bridge once. Euler proved it impossible in 1736.

One of the historical problems in graph theory is the Seven Bridges of Königsberg problem. The city of Königsberg was divided by two rivers into four landmasses, connected by 7 bridges. Citizens tried to make a round trip that crossed every bridge just once. However, this task was proved impossible by the mathematician Leonhard Euler in 1736. This problem strongly influenced the direction of graph theory. If we model the bridge problem as a graph, each bridge would be an edge and each landmass would be a vertex. Finding a trip along all of the bridges is equivalent to finding a trip from vertex to vertex that crosses every edge exactly once. In Euler's honor, this kind of 'trip' in the graph is called an Eulerian circuit. But this is only one kind of 'trip'. Any sort of 'trip' within the graph is called a walk.

The graph where each bridge is an edge, and each landmass is a vertex.

Formally, a walk is defined as a series of vertices and edges, written (v1, e1, v2, e2, ..., vn), where each edge ei has endpoints vi and vi+1. In a walk, the vertices and edges can repeat themselves any number of times. It is just an arbitrary 'trip around the graph', so to speak. There are other, more specific kinds of walk besides an Eulerian circuit. Here is a list of special walks, followed by a sketch that classifies them.

  • A path is a walk with no repeated vertex. However, it does not have to hit all of the vertices.
  • A trail is a walk with no repeated edge.
  • A walk is considered closed if the first and last vertices of the walk are the same.
  • A cycle is a closed path.
  • A circuit is a closed trail.
  • An Eulerian trail is a trail that uses all of the edges of a graph.
  • An Eulerian circuit is a circuit that uses all of the edges of a graph.
  • A Hamiltonian Path is a path that touches all of the vertices of a graph.
  • A Hamiltonian Cycle is a cycle that touches all of the vertices of a graph.
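
Here is the sketch promised above, in Python (the graph is a hypothetical example; for a simple graph, a walk can be given by its vertex sequence alone):

 # Classify a vertex sequence in a simple graph.
 edges = {frozenset(e) for e in [("a","b"), ("b","c"), ("c","a"), ("c","d")]}
 
 def classify(seq):
     steps = [frozenset(p) for p in zip(seq, seq[1:])]
     if not all(s in edges for s in steps):
         return "not a walk"
     kinds = ["walk"]
     if len(set(steps)) == len(steps):
         kinds.append("trail")   # no repeated edge
     if len(set(seq)) == len(seq):
         kinds.append("path")    # no repeated vertex
     if seq[0] == seq[-1]:
         kinds.append("closed")  # starts and ends at the same vertex
     return ", ".join(kinds)
 
 print(classify(["a", "b", "c", "d"]))  # walk, trail, path
 print(classify(["a", "b", "c", "a"]))  # walk, trail, closed (a cycle)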

Graph theory does not only study walks. It also studies the properties of individual vertices. One common property of a vertex is its degree. The vertex degree is the number of edges attached to the vertex. For a vertex u, the degree is usually represented as d(u).

The graph in the main image is known as a complete graph. A complete graph is a graph where there is one edge going between every possible pair of distinct vertices. The main image is a complete graph with six vertices.

Graphs have many other properties and even more mathematical possibilities. A few different facets of graph theory are presented below.

A More Mathematical Explanation


Terminology and Properties

Representations of a Graph

Set Representation

Sets are the most mathematically precise way to define a graph. A graph G with n vertices and m edges is formally defined as a 3-tuple with two sets and a function.

 G=(V(G), E(G), \phi )

The two sets are called the vertex set and the edge set. Those sets are written as:

V(G)=\{v_{1},\dots,v_{n}\}
E(G)=\{e_{1},\dots,e_{m}\}

The function, called an incidence function, assigns to each edge an unordered pair of (not necessarily distinct) members of the vertex set.

\phi:E(G) \rightarrow \{ V(G),V(G)\}

Taking the incidence function of an edge e, which has u and v as endpoints, is usually written like this:

 \phi (e)=uv

The graph defined by the vertex set, edge set, and incidence function shown to the left.
where u and v are vertices in the graph. The function is usually defined edge by edge, not by a rule like other functions. As an example, the following vertex set, edge set, and incidence function yield the complete graph with 4 vertices.
G=(V(G), E(G), \phi )
V(G)=\{a, b, c, d\}
E(G)=\{u, v, w, x, y, z\}
 \begin{matrix} \phi (u)=ab & \phi (v)=bd & \phi (w)=cd \\ \phi (x)=ad & \phi (y)=bc & \phi (z)=ac \end{matrix}

The vertex set cannot be empty, but the edge set could be empty.
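
A short Python sketch can spell this 3-tuple out and confirm that it really is the complete graph on 4 vertices (the edge names and the phi table are taken from the example above):

 # The complete graph K4 as a 3-tuple (V, E, phi).
 from itertools import combinations
 
 V = {"a", "b", "c", "d"}
 E = {"u", "v", "w", "x", "y", "z"}
 phi = {"u": frozenset("ab"), "v": frozenset("bd"), "w": frozenset("cd"),
        "x": frozenset("ad"), "y": frozenset("bc"), "z": frozenset("ac")}
 
 # K4: every pair of distinct vertices is joined by exactly one edge.
 assert all(sum(phi[e] == frozenset(pair) for e in E) == 1
            for pair in combinations(sorted(V), 2))
 print("phi defines K4")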

In order to define a directed graph, set notation alone cannot be used, because sets are unordered and thus cannot show any direction. Digraphs need each edge to be defined as an ordered pair (a 2-tuple) instead of a set. Thus sets, ordered pairs, and an incidence function can rigorously define graphs and digraphs. While sets are the best way to formally define a graph, there are other ways to represent a graph.

Visual Representation

A graph in visual form, called a flower snark graph.
The most famous way is visual representation. Each member of V(G) is drawn as a point or small circle, while each member of E(G) is drawn as a line between its two endpoints. A directed edge is drawn as an arrow going from the first vertex to the second. Drawing a graph can give mathematicians an overall impression of the graph's structure and of properties that are not evident from the set notation, such as the existence of Hamiltonian cycles or the degree of a vertex (defined below). This representation is good for seeing overall features, but it lacks mathematical notation and rigor.

Matrix Representation

(This section requires knowledge of some Linear Algebra.)

One more way to represent a graph is with a matrix. Mathematicians associate two different matrices with a graph. They are called the adjacency matrix A(G) and the incidence matrix M(G).

An adjacency matrix is formed from the edges between each of the vertices. To define the matrix formally, suppose there is a simple graph G where

V(G)=\{v_{1},\dots,v_{n}\}

The entries of the matrix are defined as:

a_{ij}=\left\{ \begin{array}{rcl} 0, & \mbox{if} & v_iv_j \notin E(G) \\ 1, & \mbox{if} & v_iv_j \in E(G) \end{array}\right.

where aij is the entry in the ith row and jth column of the adjacency matrix A(G). In less mathematical language, let's say that we make a table, with the column and row labels as the vertices v1 to vn. Each entry in the table, which is given by a column and a row, is thus labeled by a pair of two vertices. So, if those two vertices that make up the entry in the table are adjacent, then the entry has a 1 in it. If they aren't adjacent, then the entry has a 0 in it. This table of 0's and 1's thus becomes the matrix. It is important that these graphs are simple. If they were not, loops and multiple edges would be allowed in the graph. In this slightly more general case, aij would be the number of edges between vertices vi and vj.
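
In code, the table-building recipe is direct. Here is a plain Python sketch that constructs A(G) for the five-vertex example graph used later in this section (edges ac, ad, bc, be, cd):

 # Build the adjacency matrix of a simple graph from its edge list.
 verts = ["a", "b", "c", "d", "e"]
 edge_list = [("a","c"), ("a","d"), ("b","c"), ("b","e"), ("c","d")]
 
 idx = {v: i for i, v in enumerate(verts)}
 A = [[0] * len(verts) for _ in verts]
 for u, v in edge_list:
     A[idx[u]][idx[v]] = 1  # a 1 whenever v_i v_j is in E(G)
     A[idx[v]][idx[u]] = 1  # and symmetrically for v_j v_i
 for row in A:
     print(row)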

There are some simple properties of adjacency matrices of graphs. In a simple graph, an edge must go between two different vertices u and v. So, u is adjacent to v, and v is adjacent to u. Translating this fact into the adjacency matrix, it is clear that:

a_{ij}=a_{ji}

So the adjacency matrix of a simple graph is a symmetric matrix. Since there are no loops in a simple graph, the diagonal entries will all be 0. However, the adjacency matrix is still symmetric if there are loops. Since a loop goes from a vertex to itself, the corresponding entry in the matrix lies on the diagonal, so it does not break the symmetry of the matrix.

In an incidence matrix, we have the above V(G) as well as

E(G)=\{e_{1},\dots,e_{m}\}

The entries of the matrix are then given by:

m_{ij}=\left\{ \begin{array}{rcl} 0, & \mbox{if} & v_i \notin e_j \\ 1, & \mbox{if} & v_i \in e_j \end{array}\right.

where mij is the entry in the ith row and jth column of the incidence matrix M(G). This is similar to making A(G), except that the vertices are no longer the labels for both the rows and the columns. Instead, in M(G), the edges label the columns while the vertices label the rows. Each entry of the table is labeled by a vertex and edge pair. If the vertex is an endpoint of that edge, then the entry is a 1. If not, it is a 0. If G were not simple, little would change, because each vertex and edge pair is still unique.
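
The same recipe in Python, again for the five-vertex example below, with the edges named v through z:

 # Build the incidence matrix M(G): rows = vertices, columns = edges.
 verts = ["a", "b", "c", "d", "e"]
 edge_names = ["v", "w", "x", "y", "z"]
 endpoints = {"v": ("a","c"), "w": ("b","c"), "x": ("a","d"),
              "y": ("c","d"), "z": ("b","e")}
 
 M = [[1 if u in endpoints[e] else 0 for e in edge_names] for u in verts]
 for row in M:
     print(row)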

A graph with 5 vertices and 5 edges.
As an example, consider the graph to the right. The matrices are given by the following:
\begin{matrix} A(G)= & \begin{matrix}  & a & b & c & d & e & \\ a & 0 & 0 & 1 & 1 & 0 & \\ b & 0 & 0 & 1 & 0 & 1 & \\ c & 1 & 1 & 0 & 1 & 0 & \\ d & 1 & 0 & 1 & 0 & 0 & \\ e & 0 & 1 & 0 & 0 & 0 & \end{matrix} & & M(G)= & \begin{matrix} & v & w & x & y & z & \\ a & 1 & 0 & 1 & 0 & 0 & \\ b & 0 & 1 & 0 & 0 & 1 & \\ c & 1 & 1 & 0 & 1 & 0 & \\ d & 0 & 0 & 1 & 1 & 0 &  \\ e & 0 & 0 & 0 & 0 & 1 & \end{matrix} & \end{matrix}

This matrix notation is quite useful, since matrices can be mathematically manipulated and studied, while graphs in their visual form cannot. (Note: the letter labels are not part of the matrix.)

Graph Isomorphisms

A graph isomorphism, informally, is when two graphs are essentially the same graph. It may seem simple enough to see if two graphs are the same, but isomorphic graphs do not have to look identical to be the same graph. There are a few ways to tell whether two graphs are isomorphic to each other.

While the two graphs look very different, they are actually the same and are isomorphic to each other.
Visually, let's assume that graph G is isomorphic to graph H. This means that we can take the vertices of graph G, move them around while conserving the edge connections, and obtain H. They are essentially the same graph, but just look different. The image to the right shows two isomorphic graphs that do not look similar.

Making a formal definition of a graph isomorphism is not difficult in the set notation. All we need is a bijection from one vertex set to the other that preserves the arrangement of the vertices and edges. In mathematical notation, two graphs G and H are isomorphic if there exists a bijection

 f: V(G)\rightarrow V(H)

such that u and v are adjacent vertices in G if and only if f(u) and f(v) are adjacent in H.
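
This definition can be tested directly, if slowly. Here is a brute-force Python sketch (the two example graphs are hypothetical 4-cycles) that tries every bijection between the vertex sets:

 # Brute-force isomorphism test: try every bijection f: V(G) -> V(H).
 from itertools import permutations
 
 def isomorphic(Vg, Eg, Vh, Eh):
     Eg = {frozenset(e) for e in Eg}
     Eh = {frozenset(e) for e in Eh}
     if len(Vg) != len(Vh) or len(Eg) != len(Eh):
         return False
     for image in permutations(Vh):
         f = dict(zip(Vg, image))
         # f preserves adjacency if it carries every edge of G to an edge of H.
         if all(frozenset({f[u], f[v]}) in Eh for u, v in Eg):
             return True
     return False
 
 print(isomorphic(list("abcd"), [("a","b"),("b","c"),("c","d"),("d","a")],
                  [1, 2, 3, 4], [(1,3),(3,2),(2,4),(4,1)]))  # True

Trying all the bijections takes factorial time, so this sketch is only practical for very small graphs.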

Thus, we have an idea of a graph isomorphism for the set and visual representations of a graph. However, the matrix notation is slightly trickier (and also requires knowledge of Linear Algebra).

For algebraists, making a graph into a matrix is a dream come true. But unfortunately, there is one snag in the matrix notation. The matrices A(G) and M(G) are not products of just the graph itself, but of the graph and how the vertices and edges are labeled. For example, if the rows and columns of A(G) were ordered not as a b c d e but as c e a b d, then we would get a different matrix! Let's call the graph with this new labeling scheme H.

A(H)= \begin{matrix}  & c & e & a & b & d & \\ c & 0 & 0 & 1 & 1 & 1 & \\ e & 0 & 0 & 0 & 1 & 0 & \\ a & 1 & 0 & 0 & 0 & 1 & \\ b & 1 & 1 & 0 & 0 & 0 & \\ d & 1 & 0 & 1 & 0 & 0 & \end{matrix}

Even though we know G and H are essentially the same, or isomorphic, their adjacency matrices are different. In fact, there is a different matrix for each labeling of the graph. This appears to make the matrix notation useless, but linear algebra is up to the task of solving this problem.

This solution lies in how G and H are isomorphic. By definition, there must exist a function from V(G) to V(H) such that if u and v are adjacent in G, then f(u) and f(v) are adjacent in H. In our case, the function f is simply the relabeling. So,

 \begin{matrix} f(a) = c \\ f(b) = e \\ f(c) = a \\ f(d) = b \\ f(e) = d \end{matrix}

Since we are just rearranging the labels, what we are really doing is rearranging the columns and rows of the matrix! This is also clear from an examination of how we make the tables for the adjacency matrices A(G) and A(H). Rearranging rows and columns is a well known operation in linear algebra, as it is the same thing as multiplying the matrix by another matrix! But how do we know which matrix to use?

We can make the correct matrix from the f that forms the isomorphism in the first place. To form this matrix, we form another table. We put the vertices in their regular order a b c d e in the columns and c e a b d in the rows (this corresponds to f(a) f(b) f(c) f(d) f(e)). In the entry that corresponds to a pair of the same vertex, there is a 1. For example, in the entry that corresponds to a and a, there is a 1. Otherwise there is a 0. Let's call this matrix P. For our example, here we have:

P= \begin{matrix} & a & b & c & d & e & \\ c & 0 & 0 & 1 & 0 & 0 & \\ e & 0 & 0 & 0 & 0 & 1 & \\ a & 1 & 0 & 0 & 0 & 0 & \\ b & 0 & 1 & 0 & 0 & 0 & \\ d & 0 & 0 & 0 & 1 & 0 & \end{matrix}

In linear algebra, this P is known as a permutation matrix.

By design, multiplying on the right by P rearranges the columns of a matrix. If we call the columns c1 c2 c3 c4 c5, then c1 moves to position 3, c2 to position 5, c3 to position 1, c4 to position 2, and c5 to position 4. The transpose of P rearranges the rows in the same way. Thus, in theory,

P^{-1}A(H)P= A(G)

Indeed,

P^{-1}A(H)=\begin{pmatrix}  1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}

and

P^{-1}A(H)P=\begin{pmatrix} 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{pmatrix}=A(G)

This example shows a general principle of how one adjacency matrix can be transformed into another. As we already discussed, a relabeling creates a graph isomorphic to the original.

While this is only one specific case, this work with matrices gives us insight into the general problem. All the principles we used remain correct for any graph. The following facts are the most useful:

  • A relabeling of a graph G into a new graph H creates an isomorphic graph.
  • Relabeling a graph amounts to switching around rows and columns of the adjacency matrix.
  • Switching around rows and columns can be achieved by employing permutation matrices, created from the relabeling itself.

Here is the definition of a graph isomorphism in terms of the matrix notation. Any two graphs G and H are isomorphic if

P^{-1}A(G)P=A(H)

where P is the permutation matrix created from the relabeling.
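
To double-check the worked example, here is a short Python sketch (assuming NumPy is available) that verifies the identity numerically, using the matrices A(G), A(H), and P from above:

 # Verify the worked example: P^{-1} A(H) P should equal A(G).
 import numpy as np
 
 A_G = np.array([[0,0,1,1,0], [0,0,1,0,1], [1,1,0,1,0],
                 [1,0,1,0,0], [0,1,0,0,0]])   # vertex order a b c d e
 A_H = np.array([[0,0,1,1,1], [0,0,0,1,0], [1,0,0,0,1],
                 [1,1,0,0,0], [1,0,1,0,0]])   # vertex order c e a b d
 P = np.array([[0,0,1,0,0], [0,0,0,0,1], [1,0,0,0,0],
               [0,1,0,0,0], [0,0,0,1,0]])
 
 # For a permutation matrix, the inverse is simply the transpose.
 print(np.array_equal(P.T @ A_H @ P, A_G))  # True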

Connected Graphs and Components

A connected graph.
A graph is called connected if for every pair of vertices u and v there is a path from u to v. More simply stated, a connected graph has only one "bunch" of vertices and edges. There are no vertices without edges, which are called isolated points, and there are no disjoint "bunches" of vertices and edges.

If there is a path from u to v, we say u is connected to v. This connection relation is an Equivalence Relation: vertices u and v are related exactly when they are connected to each other. An equivalence class consists of every vertex connected to a given vertex.

The three components of the graph in three different colors.
By the properties of an equivalence relation, two equivalence classes have either nothing in common or they are the same.

This fact has consequences for graph theory. This shows us that a graph can be divided up into subgraphs that are either connected to a given vertex u or are not connected. These groups are called components of a graph. Each component is a maximal connected subgraph of the graph G. These definitions have allowed us to discern between graphs that are one massive chunk or multiple smaller chunks.
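
Components are easy to compute. Here is a Python sketch (with hypothetical vertices and edges) that grows each component with a breadth-first search:

 # Split a graph into its components with a breadth-first search.
 from collections import deque
 
 def components(vertices, edges):
     nbrs = {v: set() for v in vertices}
     for u, v in edges:
         nbrs[u].add(v)
         nbrs[v].add(u)
     seen, parts = set(), []
     for start in vertices:
         if start in seen:
             continue
         part, queue = set(), deque([start])
         while queue:
             u = queue.popleft()
             if u in part:
                 continue
             part.add(u)
             queue.extend(nbrs[u] - part)
         seen |= part
         parts.append(part)  # one maximal connected subgraph
     return parts
 
 print(components("abcde", [("a","b"), ("b","c"), ("d","e")]))
 # [{'a', 'b', 'c'}, {'d', 'e'}]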

A graph with all of its cut edges in green. If any of these are removed, then the graph will no longer be connected.
Sometimes in a graph, there are separate chunks that are connected by only one edge. An edge of this type is called a cut edge. Formally, a cut edge is an edge such that, if it were removed, the graph would be disconnected. This is just one example of how the idea of connectivity can help us define some of the visual features of graphs.

Connected graphs have numerous special properties.

Vertices and Vertex Sets

Vertex Degree

As stated above, the vertex degree is the number of edges attached to a vertex. We can relate the number of edges to the degrees of the vertices in the following way:

\sum d_{i} = 2|E(G)|

This is because the sum of the degrees counts the number of edges coming off of each point. But since edges border two points, we are counting each edge twice. Thus, the equation holds.
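
A few lines of Python (with a hypothetical edge list) confirm the count:

 # The degree sum counts each edge twice, once at each endpoint.
 edges = [("a","b"), ("a","c"), ("b","c"), ("c","d")]
 degree = {}
 for u, v in edges:
     degree[u] = degree.get(u, 0) + 1
     degree[v] = degree.get(v, 0) + 1
 
 print(degree)                                  # {'a': 2, 'b': 2, 'c': 3, 'd': 1}
 print(sum(degree.values()) == 2 * len(edges))  # True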

Vertex degree is also an important concept in other branches of graph theory. For example, let us consider adjacency matrices and incidence matrices. The adjacency matrix A(G) is defined by which vertex is adjacent to which other vertices. Each row and column corresponds to a vertex. So, the sum of the entries of the column or row corresponds to the degree of the vertex that it represents. Consider the adjacency matrix from the previous section:

\begin{matrix}  & a & b & c & d & e & \\ a & 0 & 0 & 1 & 1 & 0 & \\ b & 0 & 0 & 1 & 0 & 1 & \\ c & 1 & 1 & 0 & 1 & 0 & \\ d & 1 & 0 & 1 & 0 & 0 & \\ e & 0 & 1 & 0 & 0 & 0 & \end{matrix} = A(G)

The sum of the third row, which corresponds to vertex c, is 1 + 1 + 0 + 1 + 0 = 3. Each of the 1's in the sum corresponds to another vertex, so vertex c is adjacent to 3 other vertices. If G is a simple graph, which in this case it is, there is only one edge that goes to each other vertex. So, 3 edges must also be adjacent to c. Thus the degree of c is 3.

If G were not a simple graph, there is still a way to get the degree of a vertex from a matrix. Indeed, we can use the incidence matrix.

Since each column corresponds to an edge, the sum of a row's entries corresponds to the total number of edges attached to the given vertex of that row. Consider the incidence matrix from the previous section.

 \begin{matrix} & v & w & x & y & z & \\ a & 1 & 0 & 1 & 0 & 0 & \\ b & 0 & 1 & 0 & 0 & 1 & \\ c & 1 & 1 & 0 & 1 & 0 & \\ d & 0 & 0 & 1 & 1 & 0 &  \\ e & 0 & 0 & 0 & 0 & 1 & \end{matrix} = M(G)

Let's consider the 4th row. The sum of the entries is 0 + 0 + 1 + 1 + 0 = 2. In this row, which corresponds to d, each of the 1's represents 1 edge that is adjacent to d. Thus, by definition, the degree of d is 2. In general, the degree of a vertex is equal to the sum of the row entries which correspond to that vertex.
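
Both observations are one line each in Python (assuming NumPy), using the matrices A(G) and M(G) from above:

 # Degrees as row sums of the adjacency and incidence matrices.
 import numpy as np
 
 A = np.array([[0,0,1,1,0], [0,0,1,0,1], [1,1,0,1,0],
               [1,0,1,0,0], [0,1,0,0,0]])
 M = np.array([[1,0,1,0,0], [0,1,0,0,1], [1,1,0,1,0],
               [0,0,1,1,0], [0,0,0,0,1]])
 
 print(A.sum(axis=1))  # [2 2 3 2 1] -- degrees of a, b, c, d, e
 print(M.sum(axis=1))  # [2 2 3 2 1] -- the same degrees, from M(G)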

Cliques and Independent Sets

A graph with an example of a 3-vertex clique (orange), one 4-vertex clique (ocean color), and a 5-vertex clique (purple). The larger cliques are also made up of 3-vertex cliques.
We can also use connectivity to study vertices. A clique is a subset of vertices such that each vertex is adjacent to every other one. This is equivalent to saying that the vertices and edges in a clique make a subgraph that is complete.[2]

Cliques can be very simple: a subset of vertices that has only one, two, or three vertices in it. A single vertex trivially forms a clique. If a vertex subset has two vertices with an edge between them, then that also forms a simple clique. A clique with three vertices is a closed triangle. Beyond this, however, cliques can be difficult to visualize in general. 4-vertex cliques can appear as quadrilaterals with both diagonals drawn, or as a triangle with a vertex in the middle that is connected to all three corners.

Cliques are not usually studied by themselves, but alongside independent sets. An Independent Set is a set of vertices such that no two of them are adjacent. This idea is almost complementary to the idea of a clique. Consider a graph with many edges and few vertices. Intuitively, this kind of graph would have very large cliques, since there are many edges. But it should also have small independent sets, since so many of the vertices are connected. Conversely, consider a graph with many vertices but sparse edges. There should be a large independent set, since most of the vertices would be unconnected, but there would be few cliques. Thus, larger and more numerous cliques usually mean smaller and fewer independent sets. For more on the relationship between these two concepts, see The Party Problem (Ramsey's Theorem).
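
Both notions reduce to checking pairs of vertices, as in this Python sketch (hypothetical edges):

 # Test whether a vertex subset is a clique or an independent set.
 from itertools import combinations
 
 edges = {frozenset(e) for e in [("a","b"), ("a","c"), ("b","c"), ("c","d")]}
 
 def is_clique(S):
     return all(frozenset(p) in edges for p in combinations(S, 2))
 
 def is_independent(S):
     return all(frozenset(p) not in edges for p in combinations(S, 2))
 
 print(is_clique({"a", "b", "c"}))  # True: a, b, c form a triangle
 print(is_independent({"a", "d"}))  # True: there is no edge ad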

A bipartite graph with 8 vertices. This graph can be split up into two independent sets of 4 vertices each. The sets are on either side of the graph.
The idea of an independent set can help define more types of graphs. A graph that can be divided into two independent sets is called a bipartite graph. Partitioning the vertices into two sets is called a bipartition. Bipartite graphs have many applications, as stated in the Why It's Interesting section.
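
Whether a graph admits such a bipartition can be tested by 2-coloring it, as in this Python sketch (a hypothetical 4-cycle as the example; the two color classes are the two independent sets):

 # A graph is bipartite exactly when its vertices can be 2-colored
 # so that every edge joins two different colors.
 def is_bipartite(vertices, edges):
     nbrs = {v: set() for v in vertices}
     for u, v in edges:
         nbrs[u].add(v)
         nbrs[v].add(u)
     color = {}
     for start in vertices:
         if start in color:
             continue
         color[start] = 0
         stack = [start]
         while stack:
             u = stack.pop()
             for w in nbrs[u]:
                 if w not in color:
                     color[w] = 1 - color[u]
                     stack.append(w)
                 elif color[w] == color[u]:
                     return False  # two adjacent vertices share a color
     return True
 
 print(is_bipartite("abcd", [("a","b"), ("b","c"), ("c","d"), ("d","a")]))  # True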

Trees

As defined above, a cycle is a series of edges and vertices such that no vertex is repeated, except that the first vertex is the same as the last. Informally, it is a trip around the graph that doesn't repeat itself and comes back to the starting vertex. A graph without any cycles is called an acyclic graph. However, this could just be a bunch of vertices with no edges between them. Graphs like that are not interesting at all! If we add another restriction, we get a new type of graph.

A tree is an acyclic graph that is connected. Informally, it's a graph with no round trips in it: if someone were to take a trip in a tree, the only way to get back to the starting vertex is to go back the way he or she came. If a mathematician wanted to prove this rigorously, it would be a simple proof by contradiction: if there were another way to get back to the starting vertex, then there would be a cycle in the graph, contradicting the assumption that the graph is a tree. Another way to state this is that every path between two vertices in a tree is unique.
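
A standard equivalence (used here as an assumption; the forward direction is the fact proved below) says a tree is exactly a connected graph with one fewer edge than it has vertices, which makes checking for one short. A Python sketch, with hypothetical inputs:

 # A tree is equivalent to a connected graph with |V| - 1 edges.
 def is_tree(vertices, edges):
     vertices = list(vertices)
     if len(edges) != len(vertices) - 1:
         return False
     # Check connectivity with a depth-first search from any vertex.
     nbrs = {v: set() for v in vertices}
     for u, v in edges:
         nbrs[u].add(v)
         nbrs[v].add(u)
     seen, stack = set(), [vertices[0]]
     while stack:
         u = stack.pop()
         if u not in seen:
             seen.add(u)
             stack.extend(nbrs[u] - seen)
     return len(seen) == len(vertices)
 
 print(is_tree("abcd", [("a","b"), ("a","c"), ("c","d")]))  # True
 print(is_tree("abcd", [("a","b"), ("b","c"), ("c","a")]))  # False: a cycle, plus d isolated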

Here are some examples of trees. Visually, they look like trees (hence the name), bushes, or tournament brackets.

(Fig. 1) Three different trees. The number of nodes is just 1 greater than the number of edges.

When mathematicians analyze new types of graphs, they look at the basic properties of the graphs and try to find relations between them. Two of these properties are the number of vertices, sometimes denoted |V(G)|, and the number of edges, |E(G)|. For ease, let |V(G)| = n and |E(G)| = m. For the graphs in Figure 1, we have

\begin{matrix} & n & m \\ \text{left} & 9 & 8 \\ \text{center} & 7 & 6 \\ \text{right} & 17 & 16 \end{matrix}

In all of these cases, it is clear that m = n - 1. Is this the case for every tree?

The proof for this is given below.

Proof

We are trying to prove that if T is a tree with m edges and n vertices, then m = n - 1

This will be a proof by Induction. Thus we start with the base case for n = 1. In this case, since there is only one node, there are no edges, m = 0. Thus, m = n - 1, and the result holds for n = 1.

Now we shall state the Induction Hypothesis: assume that every tree with n vertices, where 1 ≤ n < k, has m = n - 1 edges. We will use this assumption to show that every tree with k vertices has k - 1 edges.

Let T be a tree with k ≥ 2 vertices and let uv be an edge in T. Now, consider the tree without the edge uv, called T - uv. In this new graph, there is no path from u to v, since every path in a tree between two vertices is unique. Thus T - uv is a disconnected graph, and it has two components that are also trees, since they are acyclic and connected. Let those components be called G and H. Both G and H have fewer than k vertices, so the Induction Hypothesis applies to them.

Now we consider the relationships between the number of edges and vertices of the tree and these new components. We will use the |V(G)| notation from above.

Since no vertices were removed to make the new graph T - uv, the following equations hold:

|V(G)|+|V(H)|=|V(T-uv)|=|V(T)| (1)

However, since 1 edge was removed, this equation also applies:

|E(G)|+|E(H)|=|E(T-uv)|=|E(T)|-1

Rearranging this second equation slightly yields

|E(T)|=|E(G)|+|E(H)|+1 (2)

Since the Induction Hypothesis applies to these two components G and H, the following equations also hold.

 |E(G)|=|V(G)|-1 (3)

 |E(H)|=|V(H)|-1 (4)

Now, these are all the equations needed to complete the induction step. Substituting equations (3) and (4) into equation (2) yields:

|E(T)|=(|V(G)|-1)+(|V(H)|-1)+1

And simplifying:

|E(T)|=|V(G)|+|V(H)|-1

Now, substitute equation (1) for |V(G)| + |V(H)|.

|E(T)|=|V(T)|-1

Thus the Induction Hypothesis, which assumes the cases 1 ≤ n < k, implies the case n = k, and the theorem is proven.


Why It's Interesting


Connections and Networks

The applications of graph theory are almost endless. Anything that involves objects and relations between them can be modeled as a graph. One famous example is a party, where people are represented as the vertices. If two people know each other, then there is an edge between them. If they do not know each other, then there is a dashed line between them. A study of graphs like this involves Ramsey's Theorem, but is commonly known as the Party Problem.

Another application is to computer connections, where each computer is a vertex and each connection is an edge. In computer systems, connections between computers sometimes fail. If this happens without a backup, there are large ramifications for the entire system. So, backup connections are needed. But how do people find particularly important connections? They can use graph theory to find cut edges or edges that are close to being a cut edge. Connectivity in graphs is an important concept in computer connections.

A particularly famous example of a graph theory problem is the Traveling Salesman problem. Imagine a traveling salesman, (or salesperson in this day and age, I suppose) who must travel from city to city to make sales, starting and ending at the salesperson's home city. If the salesperson knows the cost of each of the flights between cities, how can the salesman make the cheapest trip to hit each necessary city and come back to the home city?

Just like in the previous problems, we can model the cities as vertices and the flights as the edges between them. But how do we model the cost of a flight? We can use a weighted graph, where each weight is the cost. Finding the cheapest trip now corresponds to finding the Hamiltonian cycle with the least weight. There are algorithms which can quickly check the total cost of a proposed trip, but none are known that can find the cheapest trip quickly. One example of an algorithm for a similar problem is Dijkstra's Algorithm.[3] The algorithm gives the path with the least weight from one vertex to another; see The Traveling Salesman Problem page for more.
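
Here is a compact Python sketch of Dijkstra's Algorithm (the city names and fares are hypothetical). Note that it solves the least-weight-path problem, not the Traveling Salesman problem itself:

 # Dijkstra's Algorithm: least total weight from a start vertex
 # to every other vertex, using a priority queue.
 import heapq
 
 def dijkstra(weighted_edges, start):
     nbrs = {}
     for u, v, w in weighted_edges:
         nbrs.setdefault(u, []).append((v, w))
         nbrs.setdefault(v, []).append((u, w))
     dist, heap = {}, [(0, start)]
     while heap:
         d, u = heapq.heappop(heap)
         if u in dist:
             continue  # u was already settled at a smaller distance
         dist[u] = d
         for v, w in nbrs.get(u, []):
             if v not in dist:
                 heapq.heappush(heap, (d + w, v))
     return dist
 
 flights = [("Home", "A", 120), ("Home", "B", 90), ("A", "B", 60), ("A", "C", 45)]
 print(dijkstra(flights, "Home"))  # {'Home': 0, 'B': 90, 'A': 120, 'C': 165}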

Geometry

Another interesting application of graph theory is to geometry. Any shape in space can be visualized with a graph: a corner is represented by a vertex, and a side of a shape is an edge. This may seem to apply only to two-dimensional shapes, but it actually applies to shapes of any dimension. In order to make a two-dimensional image of a higher-dimensional shape, the shape needs to be projected onto two dimensions. The graphs for a square, cube, tesseract (4-dimensional cube), and 5-dimensional cube are shown below. Thus, graphs can model shapes in any dimension, which is useful for geometers.[4]

Going from left to right, the graphs for a square, cube, tesseract, and 5-dimensional cube. The last three are higher dimensional shapes being projected onto the 2-dimensional plane.





References

  1. West, Douglas B., Introduction to Graph Theory. Prentice-Hall Inc.
  2. Weisstein, Eric W. "Clique." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/Clique.html
  3. Bondy, J. A. and U. S. R. Murty, Graph Theory with Applications. The Macmillan Press Ltd, 1976.
  4. Weisstein, Eric W. "Hypercube Graph." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/HypercubeGraph.html