Graph Theory

Graph
Field: Graph Theory
Image Created By: Awjin Ahn


This is a graph with six vertices and every pair of vertices connected by an edge. It is known as the complete graph K6.

Basic Description

A graph is a collection of two kinds of objects, called vertices and edges.[1] Each edge is defined as a set of two of the vertices. Every vertex in a graph can be represented with a point or small circle, while the edges are drawn as lines between two vertices. The set of vertices in the graph G is usually denoted as V(G) while the set of edges is E(G).

If there exists an edge between vertices u and v, written {u, v}, then u and v are adjacent to each other, and u and v are the end points of the edge between them. An edge can also go from a vertex back to itself; this kind of edge is called a loop. There does not have to be just one edge between two vertices: there could be 2, or 6, or 1,239,960. Edges that share the same two end points are referred to as multiple edges.

A digraph and a weighted graph
A graph with an example of a loop (in green), double edges (in purple), and triple edges (in red).

Graphs have more diversity than just structures with simple vertices and edges. A directed graph or digraph is the same as a graph except the edges have a given direction. Edges in a digraph are drawn as arrows going from one vertex to another. A mixed graph is one that can have both regular edges and directed edges. All graphs and digraphs are also mixed graphs, just as a real number is also a complex number.

Another special type of graph is a weighted graph. In a weighted graph, each vertex or edge is given an associated number called a weight. Sometimes a weight on an edge or vertex is just a label that differentiates the parts of the graph from each other. A weighted graph often has more applications to science or programming than a regular graph does. See the "Why It's Interesting" section for more information on weighted graphs.

Inside a Graph

A map of Königsberg with the bridges in green. Citizens tried to make a trip around the city taking every bridge once. Euler proved it impossible in 1736.

One of the historical problems in graph theory is the Seven Bridges of Königsberg problem. The city of Königsberg was divided by two rivers into four landmasses, connected by 7 bridges. Citizens tried to make a trip that crossed every bridge just once. However, this task was proved impossible by the mathematician Leonhard Euler in 1736. This problem strongly influenced the direction of graph theory. If we model the bridge problem as a graph, each bridge would be an edge and each landmass would be a vertex. Finding a trip along all of the bridges is equivalent to finding a trip from vertex to vertex going across all of the edges once. In Euler's honor, this kind of 'trip' in the graph is called an Eulerian circuit. But this is only one kind of 'trip'. Any sort of 'trip' within the graph is called a walk.

The graph where each bridge is an edge, and each landmass is a vertex.

Formally, a walk is defined as an alternating sequence of vertices and edges, written (v1, e1, v2, e2, ..., vn), where each edge ei has endpoints vi and vi + 1. In a walk, the vertices and edges can repeat themselves any number of times; it is just an arbitrary 'trip around the graph', so to speak. There are other, more specific kinds of walks besides an Eulerian circuit. Here is a list of special walks.

• A path is a walk with no repeated vertex. However it does not have to hit all of the vertices.
• A trail is a walk with no repeated edge.
• A walk is considered closed if the first and last vertices of the walk are the same.
• A cycle is a closed path.
• A circuit is a closed trail.
• An Eulerian trail is a trail that uses all of the edges of a graph.
• An Eulerian circuit is a circuit that uses all of the edges of a graph.
• A Hamiltonian Path is a path that touches all of the vertices of a graph.
• A Hamiltonian Cycle is a cycle that touches all of the vertices of a graph.
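The definitions above can be made concrete in code. Here is a minimal sketch (not from the original article) that classifies walks in a simple graph given as a set of edges; since a simple graph has no multiple edges, the vertex sequence alone determines the walk. The function names and the triangle example are illustrative choices.

```python
# Walk classifiers for a simple undirected graph.
# `edges` is a set of frozensets, each holding the two endpoints of an edge.

def is_walk(edges, seq):
    """seq is a list of vertices; consecutive vertices must share an edge."""
    return all(frozenset(p) in edges for p in zip(seq, seq[1:]))

def is_trail(edges, seq):
    """A walk with no repeated edge."""
    pairs = [frozenset(p) for p in zip(seq, seq[1:])]
    return is_walk(edges, seq) and len(pairs) == len(set(pairs))

def is_path(edges, seq):
    """A walk with no repeated vertex."""
    return is_walk(edges, seq) and len(seq) == len(set(seq))

def is_closed(seq):
    """First and last vertices of the walk are the same."""
    return len(seq) > 1 and seq[0] == seq[-1]

def is_eulerian_circuit(edges, seq):
    """A closed trail that uses every edge of the graph."""
    used = {frozenset(p) for p in zip(seq, seq[1:])}
    return is_trail(edges, seq) and is_closed(seq) and used == set(edges)

# A triangle: every pair of a, b, c is joined by an edge.
triangle = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c")]}
print(is_path(triangle, ["a", "b", "c"]))                   # True
print(is_eulerian_circuit(triangle, ["a", "b", "c", "a"]))  # True
```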

Graph theory does not only study walks. It also studies the properties of individual vertices. One common property of a vertex is its degree. The vertex degree is the number of edges attached to the vertex. For a vertex u, the degree is usually represented as d(u).

The graph in the main image is known as a complete graph. A complete graph is a graph where there is one edge going between every possible pair of distinct vertices. The main image is a complete graph with six vertices.

Graphs have many other properties and even more mathematical possibilities. A few different facets of graph theory are presented below.

A More Mathematical Explanation


Terminology and Properties

Representations of a Graph

Set Representation

Sets are the most mathematically precise way to define a graph. A graph G with n vertices and m edges is formally defined as a 3-tuple with two sets and a function.

$G=(V(G), E(G), \phi )$

The two sets are called the vertex set and the edge set. Those sets are written as:

$V(G)=\{v_{1},\dots,v_{n}\}$
$E(G)=\{e_{1},\dots,e_{m}\}$

The function, called an incidence function, maps each edge to an unordered pair of (not necessarily distinct) vertices:

$\phi:E(G) \rightarrow \{\{u,v\} : u, v \in V(G)\}$

Taking the incidence function of an edge e, which has u and v as endpoints, is usually written like this:

$\phi (e)=uv$

The graph defined by the vertex set, edge set, and incidence function shown to the left.
where u and v are vertices in the graph. The function is usually defined edge by edge, not by a rule like other functions. As an example, the following vertex set, edge set, and incidence function yield the complete graph with 4 vertices.
$G=(V(G), E(G), \phi )$
$V(G)=\{a, b, c, d\}$
$E(G)=\{u, v, w, x, y, z\}$
$\begin{matrix} \phi (u)=ab & \phi (v)=bd & \phi (w)=cd \\ \phi (x)=ad & \phi (y)=bc & \phi (z)=ac \end{matrix}$

The vertex set cannot be empty, but the edge set could be empty.
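The 3-tuple definition translates directly into code. The sketch below (not from the original article) writes out a vertex set, edge set, and incidence function, with edge z joining a and c so that the six edges cover all six vertex pairs, and then checks that the result really is the complete graph K4.

```python
from itertools import combinations

# The 3-tuple (V, E, phi): phi maps each edge label to the
# unordered pair of its endpoints.
V = {"a", "b", "c", "d"}
E = {"u", "v", "w", "x", "y", "z"}
phi = {
    "u": frozenset("ab"), "v": frozenset("bd"), "w": frozenset("cd"),
    "x": frozenset("ad"), "y": frozenset("bc"), "z": frozenset("ac"),
}

# K4 check: every pair of distinct vertices appears as the
# endpoints of exactly one edge.
endpoint_pairs = sorted(sorted(p) for p in phi.values())
all_pairs = sorted(sorted(p) for p in combinations(sorted(V), 2))
print(endpoint_pairs == all_pairs)  # True
```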

Set notation alone cannot define a directed graph, because sets are unordered and thus cannot express a direction. In a digraph, each edge is instead defined as an ordered pair (a 2-tuple) rather than a set. Thus sets, ordered pairs, and an incidence function can rigorously define graphs and digraphs. While sets are the best way to formally define a graph, there are other ways to represent one.

Visual Representation

A graph in visual form, called a flower snark graph.
The most famous representation is visual. Each member of V(G) is drawn as a point or small circle, while each member of E(G) is drawn as a line between its two endpoints. A directed edge is drawn as an arrow going from the first vertex to the second. Drawing a graph can give mathematicians an overall impression of its structure and of properties that are not evident from the set notation, such as the existence of Hamiltonian cycles or the degree of a vertex (defined below). This representation is good for seeing overall features, but it lacks mathematical notation and rigor.

Matrix Representation

(This section requires knowledge of some Linear Algebra.)

One more way to represent a graph is with a matrix. Mathematicians associate two different matrices with a graph. They are called the adjacency matrix A(G) and the incidence matrix M(G).

An adjacency matrix is formed from the edges between each of the vertices. To define the matrix formally, suppose there is a simple graph G where

$V(G)=\{v_{1},\dots,v_{n}\}$

The entries of the matrix are defined as:

$a_{ij}=\left\{ \begin{array}{rcl} 0, & \mbox{if} & v_iv_j \notin E(G) \\ 1, & \mbox{if} & v_iv_j \in E(G) \end{array}\right.$

where aij is the entry in the ith row and jth column of the adjacency matrix A(G). In less mathematical language, let's say that we make a table, with the column and row labels as the vertices v1 to vn. Each entry in the table, which is given by a column and a row, is thus labeled by a pair of two vertices. So, if those two vertices that make up the entry in the table are adjacent, then the entry has a 1 in it. If they aren't adjacent, then the entry has a 0 in it. This table of 0's and 1's thus becomes the matrix. It is important that these graphs are simple. If they were not, loops and multiple edges would be allowed in the graph. In this slightly more general case, aij would be the number of edges between vertices vi and vj.
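The table-building procedure just described is easy to carry out in code. This is a short sketch (the helper name is an illustrative choice, not from the article) that builds the adjacency matrix of a simple graph from its edge list:

```python
def adjacency_matrix(vertices, edges):
    """Build A(G) for a simple undirected graph: a_ij = 1 exactly
    when vertices i and j are adjacent."""
    index = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[index[u]][index[v]] = 1
        A[index[v]][index[u]] = 1  # undirected: fill both entries
    return A

# A path a - b - c:
A = adjacency_matrix(["a", "b", "c"], [("a", "b"), ("b", "c")])
print(A)  # [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```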

There are some simple properties of adjacency matrices. In a simple graph, an edge must go between two different vertices u and v, so u is adjacent to v and v is adjacent to u. Translating this fact into the adjacency matrix, it is clear that:

$a_{ij}=a_{ji}$

So the adjacency matrix of a simple graph is a symmetric matrix. Since there are no loops in a simple graph, the diagonal entries will all be 0. The adjacency matrix remains symmetric even if loops are allowed: a loop goes from a vertex to itself, so its entry lies on the diagonal and does not break the symmetry.

In an incidence matrix, we have the above V(G) as well as

$E(G)=\{e_{1},\dots,e_{m}\}$

The entries of the matrix are then given by:

$m_{ij}=\left\{ \begin{array}{rcl} 0, & \mbox{if} & v_i \notin e_j \\ 1, & \mbox{if} & v_i \in e_j \end{array}\right.$

where mij is the entry in the ith row and jth column of the incidence matrix M(G). Constructing M(G) is similar to constructing A(G), except that the vertices no longer label both the rows and the columns: in M(G), the edges label the columns while the vertices label the rows. Each entry of the table is labeled by a vertex-edge pair. If the vertex is an endpoint of that edge, the entry is a 1; if not, it is a 0. If G were not simple, little would change, because each vertex-edge pair is still either incident or not.

A graph with 5 vertices and 5 edges.
As an example, consider the graph to the right. The matrices are given by the following:
$\begin{matrix} A(G)= & \begin{matrix} & a & b & c & d & e & \\ a & 0 & 0 & 1 & 1 & 0 & \\ b & 0 & 0 & 1 & 0 & 1 & \\ c & 1 & 1 & 0 & 1 & 0 & \\ d & 1 & 0 & 1 & 0 & 0 & \\ e & 0 & 1 & 0 & 0 & 0 & \end{matrix} & & M(G)= & \begin{matrix} & v & w & x & y & z & \\ a & 1 & 0 & 1 & 0 & 0 & \\ b & 0 & 1 & 0 & 0 & 1 & \\ c & 1 & 1 & 0 & 1 & 0 & \\ d & 0 & 0 & 1 & 1 & 0 & \\ e & 0 & 0 & 0 & 0 & 1 & \end{matrix} & \end{matrix}$

This matrix notation is quite useful, since matrices can be mathematically manipulated and studied, while graphs in their visual form cannot. (Note: the letter labels are not part of the matrix.)
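As a sanity check on the example, the two matrices can be typed in and compared. A standard linear-algebra identity (an aside, not stated in the article) says that for a simple graph, M·Mᵀ has the vertex degrees on its diagonal and A(G) everywhere else, because entry (i, j) of M·Mᵀ counts the edges incident to both vi and vj. A sketch:

```python
# A(G) and M(G) from the example, rows/columns in the order a b c d e
# (vertices) and v w x y z (edges).
A = [[0,0,1,1,0],
     [0,0,1,0,1],
     [1,1,0,1,0],
     [1,0,1,0,0],
     [0,1,0,0,0]]
M = [[1,0,1,0,0],
     [0,1,0,0,1],
     [1,1,0,1,0],
     [0,0,1,1,0],
     [0,0,0,0,1]]

n = len(A)
# Compute M · M^T by hand (entry i,j is the dot product of rows i and j).
MMt = [[sum(M[i][k] * M[j][k] for k in range(5)) for j in range(n)]
       for i in range(n)]
off_diagonal_ok = all(MMt[i][j] == A[i][j]
                      for i in range(n) for j in range(n) if i != j)
degrees_ok = all(MMt[i][i] == sum(A[i]) for i in range(n))
print(off_diagonal_ok and degrees_ok)  # True
```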

Graph Isomorphisms

A graph isomorphism, informally, is when two graphs are essentially the same graph. It may seem simple enough to see if two graphs are the same, but isomorphic graphs do not have to look identical to be the same graph. There are a few ways to tell whether two graphs are isomorphic to each other.

While the two graphs look very different, they are actually the same and are isomorphic to each other.
Visually, let's assume that graph G is isomorphic to graph H. This means that we can take the vertices of graph G, move them around while conserving the edge connections, and obtain H. They are essentially the same graph, but just look different. The image to the right shows two isomorphic graphs that do not look similar.

Making a formal definition of a graph isomorphism is not difficult in set notation. All we need is a bijection from one vertex set to the other that preserves the arrangement of the vertices and edges. In mathematical notation, two graphs G and H are isomorphic if there exists a bijection

$f: V(G)\rightarrow V(H)$

such that u and v are adjacent vertices in G if and only if f(u) and f(v) are adjacent in H.
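For small graphs, this definition can be checked by brute force: try every bijection between the vertex sets and test whether it carries the edge set of one graph exactly onto the edge set of the other. A sketch (names and the 4-cycle example are illustrative, not from the article):

```python
from itertools import permutations

def isomorphic(V1, E1, V2, E2):
    """Brute-force isomorphism test for small simple graphs."""
    E1 = {frozenset(e) for e in E1}
    E2 = {frozenset(e) for e in E2}
    if len(V1) != len(V2) or len(E1) != len(E2):
        return False
    for image in permutations(V2):
        f = dict(zip(V1, image))  # one candidate bijection
        if {frozenset({f[u], f[v]}) for u, v in E1} == E2:
            return True
    return False

# A 4-cycle drawn with two different labelings is still the same graph.
print(isomorphic("abcd", [("a","b"), ("b","c"), ("c","d"), ("d","a")],
                 "wxyz", [("w","y"), ("y","x"), ("x","z"), ("z","w")]))  # True
```

Trying all n! bijections is hopeless for large graphs, which is part of why the matrix viewpoint below is valuable.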

Thus, we have an idea of a graph isomorphism for the set and visual representations of a graph. However, the matrix notation is slightly trickier (and also requires knowledge of Linear Algebra).

For algebraists, turning a graph into a matrix is a dream come true. Unfortunately, there is one snag: the matrices A(G) and M(G) are not products of the graph alone, but of the graph together with how its vertices and edges are labeled. For example, if the rows and columns of A(G) were ordered not as a b c d e but as c e a b d, then we would get a different matrix! Let's call the graph with this new labeling scheme H.

$\begin{matrix} & c & e & a & b & d & \\ c & 0 & 0 & 1 & 1 & 1 & \\ e & 0 & 0 & 0 & 1 & 0 & \\ a & 1 & 0 & 0 & 0 & 1 & \\ b & 1 & 1 & 0 & 0 & 0 & \\ d & 1 & 0 & 1 & 0 & 0 & \end{matrix} = A(H)$

Even though we know G and H are essentially the same, or isomorphic, their adjacency matrices are different. In fact, there is a different matrix for each labeling of the graph. This appears to make the matrix notation useless, but linear algebra is up to the task of solving this problem.

This solution lies in how G and H are isomorphic. By definition, there must exist a function from V(G) to V(H) such that if u and v are adjacent in G, then f(u) and f(v) are adjacent in H. In our case, the function f is simply the relabeling. So,

$\begin{matrix} f(a) = c \\ f(b) = e \\ f(c) = a \\ f(d) = b \\ f(e) = d \end{matrix}$

Since we are just rearranging the labels, what we are really doing is essentially rearranging the columns and rows of the matrix! This is also clear from an examination of how we are making our table for the adjacency matrices of A(G) and A(H). Rearranging rows and columns is a well known phenomenon in linear algebra, as it is the same thing as multiplying the matrix by a matrix! But how do we know which matrix to use?

We can make the correct matrix from the f that forms the isomorphism in the first place. To form this matrix, we form another table. We put the vertices in their regular order a b c d e in the columns and c e a b d in the rows (this corresponds to f(a) f(b) f(c) f(d) f(e)). In the entry that corresponds to a pair of the same vertex, there is a 1. For example, in the entry that corresponds to a and a, there is a 1. Otherwise there is a 0. Let's call this matrix P. For our example, here we have P.

$\begin{matrix} & a & b & c & d & e & \\ c & 0 & 0 & 1 & 0 & 0 & \\ e & 0 & 0 & 0 & 0 & 1 & \\ a & 1 & 0 & 0 & 0 & 0 & \\ b & 0 & 1 & 0 & 0 & 0 & \\ d & 0 & 0 & 0 & 1 & 0 & \end{matrix} = P$

In linear algebra, this P is known as a permutation matrix.

By design, right-multiplying by this matrix permutes the columns of a matrix: if its columns are c1 c2 c3 c4 c5, the columns of the product appear in the order c3 c4 c1 c5 c2. Multiplying on the left by the transpose of P permutes the rows in the same way. Thus, in theory,

$P^{-1}A(H)P= A(G)$

Indeed,

$P^{-1}A(H)=\begin{pmatrix} 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}$

and

$P^{-1}A(H)P=\begin{pmatrix} 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{pmatrix}=A(G)$

This example shows a general principle of how one adjacency matrix can be transformed into another. As we already discussed, a relabeling creates a graph isomorphic to the original.
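The matrix computation above is easy to verify numerically. This sketch uses plain Python lists rather than a linear-algebra library; for a permutation matrix, the inverse is simply the transpose, which is what the code exploits.

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# A(G), A(H), and P exactly as in the worked example above.
A_G = [[0,0,1,1,0],[0,0,1,0,1],[1,1,0,1,0],[1,0,1,0,0],[0,1,0,0,0]]
A_H = [[0,0,1,1,1],[0,0,0,1,0],[1,0,0,0,1],[1,1,0,0,0],[1,0,1,0,0]]
P   = [[0,0,1,0,0],[0,0,0,0,1],[1,0,0,0,0],[0,1,0,0,0],[0,0,0,1,0]]

P_T = [list(row) for row in zip(*P)]  # P inverse = P transpose
print(matmul(matmul(P_T, A_H), P) == A_G)  # True
```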

While this is only one specific case, the work with matrices gives us insight into the general problem. All the principles we used remain correct for any graph. The following facts are the most useful:

• A relabeling of a graph G into a new graph H creates an isomorphic graph.
• Relabeling a graph amounts to switching around rows and columns of the adjacency matrix.
• Switching around rows and columns can be achieved by employing permutation matrices, created from the relabeling itself.

Here is the definition of a graph isomorphism in terms of matrix notation. Two graphs G and H are isomorphic if and only if

$P^{-1}A(G)P=A(H)$

for some permutation matrix P. (In the worked example, that matrix is the inverse of the P built from the relabeling, since there we showed P⁻¹A(H)P = A(G).)

Connected Graphs and Components

A connected graph.
A graph is called connected if for every pair of vertices u and v there is a path from u to v. More simply stated, a connected graph has only one "bunch" of vertices and edges. There are no vertices with no edges, which are called isolated points, and there are no disjoint "bunches" of vertices and edges.

If there is a path from u to v, then u is connected to v. This "is connected to" relation is an equivalence relation on the vertices, called the connection relation. An equivalence class consists of all the vertices connected to a given vertex.

The three components of the graph in three different colors.
By the properties of an equivalence relation, two equivalence classes are either disjoint or identical.

This fact has consequences for graph theory: a graph can be divided into subgraphs according to which vertices are connected to one another. These pieces are called the components of the graph. Each component is a maximal connected subgraph of the graph G. These definitions let us distinguish graphs that form one massive chunk from graphs that fall into multiple smaller chunks.
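Finding the components of a graph is a classic exercise: a breadth-first search from each not-yet-visited vertex collects one maximal connected "chunk". A minimal sketch (function name and example are illustrative):

```python
from collections import deque

def components(vertices, edges):
    """Return the components of an undirected graph as a list of vertex sets."""
    neighbors = {v: set() for v in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    seen, comps = set(), []
    for start in vertices:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:                      # breadth-first search from `start`
            v = queue.popleft()
            if v in comp:
                continue
            comp.add(v)
            queue.extend(neighbors[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Two separate "bunches" plus an isolated point f: three components.
print(len(components("abcdef", [("a","b"), ("b","c"), ("d","e")])))  # 3
```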

A graph with all of its cut edges in green. If any of these are removed, then the graph will no longer be connected.
Sometimes separate chunks of a graph are held together by only one edge. An edge of this type is called a cut edge. Formally, a cut edge is an edge whose removal would disconnect the graph. This is just one example of how the idea of connectivity can help us define some of the visual features of graphs.
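The formal definition can be tested directly, if slowly: delete each edge in turn and check whether the graph stays connected. (Faster bridge-finding algorithms exist; this sketch just mirrors the definition, and its names are illustrative.)

```python
def connected(vertices, edges):
    """Depth-first search from one vertex; the graph is connected
    exactly when the search reaches every vertex."""
    neighbors = {v: set() for v in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(neighbors[v] - seen)
    return seen == set(vertices)

def cut_edges(vertices, edges):
    """An edge is a cut edge if removing it disconnects the graph."""
    return [e for e in edges
            if not connected(vertices, [x for x in edges if x != e])]

# In the path a - b - c both edges are cut edges; adding edge a - c
# (making a triangle) leaves none.
print(cut_edges("abc", [("a","b"), ("b","c")]))             # [('a', 'b'), ('b', 'c')]
print(cut_edges("abc", [("a","b"), ("b","c"), ("a","c")]))  # []
```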

Connected graphs have numerous other special properties.

Vertices and Vertex Sets

Vertex Degree

As stated above, the vertex degree is the number of edges incident to (attached at) the vertex. We can relate the number of edges to the degrees of the vertices in the following way:

$\sum_{i=1}^{n} d(v_{i}) = 2|E(G)|$

since each edge contributes 1 to the degree of each of its two end points, so summing the degrees counts every edge exactly twice.
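This degree-sum fact (each edge is counted once at each of its two end points) can be checked on any example. A small sketch with an illustrative graph:

```python
def degrees(vertices, edges):
    """Tally d(v) for every vertex by scanning the edge list once."""
    d = {v: 0 for v in vertices}
    for u, v in edges:
        d[u] += 1  # each edge adds 1 to the degree of
        d[v] += 1  # both of its end points
    return d

edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
d = degrees("abcd", edges)
print(d)                                  # {'a': 2, 'b': 2, 'c': 3, 'd': 1}
print(sum(d.values()) == 2 * len(edges))  # True
```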

Why It's Interesting


Connections and Networks

The applications of graph theory are almost endless. Anything that involves objects and relations between them can be modeled as a graph. One famous example is a party, where people are represented as the vertices. If two people know each other, then there is an edge between them. If they do not know each other, then there is a dashed line between them. A study of graphs like this involves Ramsey's Theorem, but is commonly known as the Party Problem.

Another application is to computer connections, where each computer is a vertex and each connection is an edge. In computer systems, connections between computers sometimes fail. If this happens without a backup, there are large ramifications for the entire system. So, backup connections are needed. But how do people find particularly important connections? They can use graph theory to find cut edges or edges that are close to being a cut edge. Connectivity in graphs is an important concept in computer connection graphs.

A particularly famous example of a graph theory problem is the Traveling Salesman problem. Imagine a traveling salesman, (or salesperson in this day and age, I suppose) who must travel from city to city to make sales, starting and ending at the salesperson's home city. If the salesperson knows the cost of each of the flights between cities, how can the salesman make the cheapest trip to hit each necessary city and come back to the home city?

Just like in the previous problems, we can model the cities as vertices and the flights as the edges between them. But how do we model the cost of the flight? We can associate each edge with a number, called a weight. Each weight, in this case, would be the cost. Finding the cheapest trip now corresponds to finding the Hamiltonian cycle with the least total weight. The cost of a proposed trip can be checked quickly, but no known algorithm finds the cheapest trip quickly. One example of an algorithm for a related problem is Dijkstra's Algorithm.[2] The algorithm gives the path with the least weight from one vertex to another. See The Traveling Salesman Problem page for more detail.
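Dijkstra's algorithm itself fits in a few lines using a priority queue. This is a compact sketch, not the article's own presentation; the flight network and its weights are made up for illustration, and edges are treated as one-way for simplicity.

```python
import heapq

def dijkstra(neighbors, source):
    """neighbors maps a vertex to a list of (neighbor, weight) pairs.
    Returns the least total weight from source to each reachable vertex."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter route was already found
        for v, w in neighbors.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Hypothetical flight costs: direct home->a costs 3, but home->b->a costs 2.
flights = {"home": [("a", 3), ("b", 1)],
           "b": [("a", 1), ("c", 7)],
           "a": [("c", 2)]}
print(dijkstra(flights, "home"))  # {'home': 0, 'a': 2, 'b': 1, 'c': 4}
```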

Geometry

Another interesting application of graph theory is to geometry. Any shape in space can be visualized with a graph: a corner is represented by a vertex, and a side of the shape is an edge. This may seem to apply only to two-dimensional shapes, but it actually applies to shapes of any dimension. In order to make a two-dimensional image of a higher-dimensional shape, the shape is projected onto two dimensions. The hypercube graphs (the square, cube, tesseract, and so on) are a standard family of examples.[3] Thus, graphs can model shapes in any dimension, which is useful for geometers.

References

1. West, Douglas B. Introduction to Graph Theory. Prentice-Hall, Inc.
2. Bondy, J. A., and U. S. R. Murty. Graph Theory with Applications. The Macmillan Press Ltd., 1976.
3. Weisstein, Eric W. "Hypercube Graph." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/HypercubeGraph.html