# Graph Theory 101

## A fun introduction to graph theory — what is it and why is it relevant to us today?

## A brief origin story

One of the most **famous cities in mathematics** is the old Prussian city of **Königsberg**, which lay on both sides of the Pregel river.

In the center of the river, there lay *2 islands* that were connected to each other and the river banks by *7 bridges*.

Something like this:

There was a mathematician, **Carl Gottlieb Ehler**, who later became the mayor of a nearby town, and who was absolutely obsessed with finding a solution to this problem:

Is there a route that would allow someone to cross all 7 bridges without crossing any of them more than once?

Try solving it… (I’ll give you a couple of seconds ⏲️)

Spoiler alert: you can’t. There is no solution to this problem!

But attempting to *explain why it can’t be solved* led fellow mathematician **Leonhard Euler** to invent a new field of mathematics. It was a new kind of geometry, termed *‘the geometry of position’*. Today, this is known as the field of **graph theory**.

You can thank this genius for creating a domain of mathematics that has endless applications and makes up the foundation for some incredibly useful and important algorithms in computer science!

A super fascinating origin story, if you ask me!

## Now, onto the fun stuff. This article has two parts:

1. **Intro to graph theory**: all the basic jargon that you need to know and some common types of graphs + different ways to represent graphs
2. **Applications of graph theory**: 2 basic algorithms briefly explained (breadth-first search and depth-first search) + other cool applications!

# Part 1: The essence of a graph

Let’s take a group of individuals that are connected to some social media platforms, like LinkedIn. If we map out all the connections between these people, it could look something like this.

Each person is a vertex of the graph. If they are connected on LinkedIn, then there’s a line connecting them.

That’s the essence of a graph — a collection of vertices, V, and edges (the lines connecting the people in the social network), E.

Note: vertices can also be referred to as ‘nodes’.

If we take the graph above and ‘abstract it’, it looks like this:

Some key terms + definitions:

- **Incident**: x is incident to A and E. Any edge is incident to 2 vertices.
- **Adjacent**: G is adjacent to D, F, and H because there is some edge going from G to all these other vertices. Adjacent vertices are connected by an edge.
- **Isolated**: B is isolated because it’s not connected to any other vertices (just like how poor Bob doesn’t have any connections on LinkedIn…)
- **Degree**: A has a degree of 2, B has a degree of 0, and F has a degree of 4. Very simple, the degree is just the number of connections a vertex has.
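These terms fall straight out of an adjacency-list representation (more on those below). Here’s a tiny sketch in Python, using a made-up toy graph rather than the exact one from the figure:

```python
# A toy undirected graph as an adjacency list (a hypothetical example,
# not the exact graph from the figure above).
graph = {
    "A": ["C", "E"],
    "B": [],            # isolated: no neighbours at all
    "C": ["A", "E"],
    "E": ["A", "C"],
}

def degree(graph, v):
    """The degree of v is just the number of edges incident to it."""
    return len(graph[v])

print(degree(graph, "A"))   # 2
print(degree(graph, "B"))   # 0 (B is isolated, like poor Bob)
print("E" in graph["A"])    # True: A and E are adjacent
```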

## One way versus two-way streets: directed versus undirected

There are **two basic types** of graphs: directed and undirected. You can think of directed graphs as one-way streets, meaning you can only travel from one vertex to the other in one direction. Undirected graphs are like normal streets where you can travel in both directions.

In other words, in **directed graphs**, each edge can only be traversed in one direction: you can go from the start vertex to the end vertex, but not back along the same edge. In **undirected graphs**, every edge can be traversed in both directions, so you’re free to walk back and forth along any path you like.

If we represent an edge as edge = (vertex, vertex) and we use vertices A and E as examples:

For an undirected graph, edge = {A, E} = {E, A}. The same does not hold true for directed graphs, because the edge has a direction, meaning edge = (A, E) ≠ (E, A).
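This distinction maps neatly onto Python’s own data types: an unordered pair versus an ordered tuple. A quick sketch:

```python
# Undirected edges are unordered pairs: {A, E} is the same edge as {E, A}.
undirected_ae = frozenset({"A", "E"})
undirected_ea = frozenset({"E", "A"})
print(undirected_ae == undirected_ea)  # True

# Directed edges are ordered tuples: (A, E) is NOT the same edge as (E, A).
directed_ae = ("A", "E")
directed_ea = ("E", "A")
print(directed_ae == directed_ea)      # False
```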

## Short versus long streets: weighted versus unweighted

On top of directed versus undirected, graphs can also be weighted. The **weight** can be representative of a lot of different things, but for our illustration, let’s think of the weight as a value representing the length of the edge.

It’s either 1 (shorter) or 2 (longer).

You can see that the weighted version changes the ‘shape’ of the graph but the *underlying connections (edges) are still the same*.

You can think of it as the *degree of connection*: like how you have 1st and 2nd degree connections, you have edge weights of 1 or 2. An edge weight of 1 is a 1st degree connection because the two individuals are more closely connected than in a 2nd degree connection (which is an edge weight of 2).
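In code, a weighted graph just means each neighbour carries a weight along with it. A minimal sketch, with hypothetical people standing in for the network above:

```python
# Weighted adjacency list: each neighbour is paired with an edge weight.
# Weights of 1 and 2 stand in for 1st- and 2nd-degree connections
# (the names are made up for illustration).
weighted = {
    "Alice": [("Bob", 1), ("Cara", 2)],
    "Bob":   [("Alice", 1)],
    "Cara":  [("Alice", 2)],
}

# Total weight of all edges incident to Alice:
total = sum(w for _, w in weighted["Alice"])
print(total)  # 3
```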

Hopefully, you’re starting to see the implications behind these networks. If not, here are two more social network graph examples to kickstart your imagination!

Two more important concepts before we move onto applications!

## Different ways to represent these graphs

To represent these graphs mathematically as some set (which is important if we want to feed the graph into some arbitrary algorithm), we need to represent them **linearly**.

Essentially, it’s taking a *non-linear data structure* (the visual representation) and mapping it to a *linear data structure* (e.g. a list) so our computer can interpret the graph.

Some common methods of representing graphs include:

**1. Edge lists**: an unordered list of edges. Uses *triplet notation* **(u, v, w)**, where u and v are the two vertices the edge connects and w is the weight. If there is no weight, then the edge is simply represented by **(u, v)**

- This is practical, but not efficient for really dense graphs (meaning a lot of vertices that are all connected). It’s not commonly used because of its lack of structure. Its space complexity is
**O(E).**

**2. Adjacency lists**: lists out the connections for each vertex.

- Again, not the best for dense graphs, but it’s commonly used because of its easy-to-read structure. Its space complexity is
**O(V + E).**

**3. Adjacency matrix**: a V × V grid where the entry in row v, column w marks whether there’s an edge between v and w.

- Allows constant-time edge lookups, which makes it a good method for encoding dense graphs. Commonly used and very practical, though it takes the same amount of space no matter how few edges there are. Its space complexity is
**O(V²).**

**Here’s a simple example with an unweighted graph:**
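To make the comparison concrete, here’s one small unweighted, undirected graph written out in all three representations (a hypothetical example, not the graph from the figure):

```python
# One small undirected graph, three ways.
# Vertices are numbered 0..3; edges: 0-1, 0-2, 1-2, 2-3.

# 1. Edge list: an unordered list of (u, v) pairs.
edge_list = [(0, 1), (0, 2), (1, 2), (2, 3)]

# 2. Adjacency list: the neighbours of each vertex.
adjacency_list = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

# 3. Adjacency matrix: row v, column w is 1 iff v and w share an edge.
adjacency_matrix = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]

# Sanity check: every edge shows up in all three representations.
for u, v in edge_list:
    assert v in adjacency_list[u] and adjacency_matrix[u][v] == 1
print("all three representations agree")
```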

# Part 2: Why is all of this even relevant to you?

In a nutshell, graph theory is the study of *relationships* or *how things are connected*. With a set of vertices and edges we can **abstract anything** from biological networks to computer data and then apply graph theory to **extract unique insight** by quantifying and simplifying the many parts of these dynamic systems.

Graph theory is just an abstraction of the connectedness of information — everything, even the most complex networks, can be reduced into some form of a graph (albeit, a complex one).

In fact, graph theory is *useful for almost any problem* that falls within the scope of **arrangement**, **optimization**, **networking**, **combinatorics**, **matching**, and **operational** problems.

That’s a lot of problems that we can now simplify and solve!

## A quick rundown of some applications

**Computer science/networks** — graphs are used:

- For the study of algorithms like Dijkstra’s algorithm.
- To define the flow of computation or information.
- To represent data organization.
- To find the shortest path in a road or network — think Google Maps!
- To model and build secure networks.

**Electrical engineering**:

- Graph theory is used in designing circuit connections.

**Linguistics** — graphs are used:

- For parsing of a language tree and/or the grammar of a language.
- For semantic networks (which can be represented as graphs): lexical databases of English that are used/studied in NLP!

**Physics and chemistry** — graphs are used to:

- Model and study molecules: the 3D structure of complicated atomic structures can be quantitatively modelled and studied with graphs.
- Study statistical physics (i.e., local connections between interacting parts of a system or the dynamics of a physical process in such systems).

**Biology:**

- Nodes in biological networks can represent genes, proteins, or metabolites, and edges connecting these nodes indicate functional, physical or chemical interactions.
- Graphs can be used to model PPI networks (protein-protein interactions) or to characterize drug-drug target relationships and/or interactions!

## Two common algorithms used (based on graph theory)

While rarely used in isolation, the **depth-first search (DFS)** and **breadth-first search (BFS)** algorithms are building blocks for many other algorithms.

Both these algorithms have a time complexity of **O(V+E)** where V is the number of nodes/vertices and E is the number of edges.

**DFS in a nutshell**

DFS uses a stack — either one we manage ourselves or the call stack (via recursion). The DFS algorithm processes vertices in *LIFO* (last in, first out) order and relies on backtracking. It’s a complete search that exhausts all possible paths and goes *deep*.

DFS follows any edge out of the current vertex until it can’t go any further, then backtracks and continues, never revisiting vertices that have already been explored.

Let’s say this is our graph:

Here’s how to implement DFS:

```python
def dfs_iterative(graph, start):
    stack = [start]   # LIFO: pop from the end
    path = []         # visited vertices, in visit order
    while stack:
        vertex = stack.pop()
        if vertex in path:
            continue  # already visited
        path.append(vertex)
        for neighbor in graph[vertex]:
            stack.append(neighbor)
    return path

adjacency_list = {1: [2],
                  2: [3, 4, 5],
                  3: [2, 4],
                  4: [2, 3],
                  5: [2, 6],
                  6: [4, 5, 7],
                  7: [6, 8],
                  8: [7]}

print(dfs_iterative(adjacency_list, 1))
```

The output is this:

`[1, 2, 5, 6, 7, 8, 4, 3]`

Here’s the algorithm in action:
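The version above manages its own stack; as mentioned earlier, the call stack can play that role instead. Here’s a recursive sketch over the same adjacency list (this variant is not from the original listing). Note that it visits neighbors in listed order, so the output differs from the iterative version, which pops the *last* pushed neighbor first:

```python
# Recursive DFS: the call stack replaces the explicit stack.
def dfs_recursive(graph, vertex, path=None):
    if path is None:
        path = []
    path.append(vertex)
    for neighbor in graph[vertex]:
        if neighbor not in path:
            dfs_recursive(graph, neighbor, path)
    return path

adjacency_list = {1: [2], 2: [3, 4, 5], 3: [2, 4], 4: [2, 3],
                  5: [2, 6], 6: [4, 5, 7], 7: [6, 8], 8: [7]}

print(dfs_recursive(adjacency_list, 1))  # [1, 2, 3, 4, 5, 6, 7, 8]
```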

Now, for BFS…

**BFS in a nutshell**

Breadth-First Search (BFS) iterates with a queue. The BFS algorithm processes vertices in *FIFO* (first in, first out) order and does not backtrack. It checks whether paths exist between nodes and goes *wide*.

It’s particularly useful for finding the *shortest path on unweighted graphs.*
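Here’s one way that idea can be sketched. This helper, `bfs_shortest_path`, is not from the article: it extends the queue-based traversal by carrying the path taken to each vertex, so the first time the goal is dequeued, the path to it is guaranteed to have the fewest edges:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    # Each queue entry pairs a vertex with the path that reached it.
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        vertex, path = queue.popleft()  # FIFO: explore level by level
        if vertex == goal:
            return path  # first arrival = fewest edges on unweighted graphs
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None  # no path exists

adjacency_list = {1: [2], 2: [3, 4, 5], 3: [2, 4], 4: [2, 3],
                  5: [2, 6], 6: [4, 5, 7], 7: [6, 8], 8: [7]}

print(bfs_shortest_path(adjacency_list, 1, 8))  # [1, 2, 5, 6, 7, 8]
```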

Let’s say this is our graph:

Here’s how to implement BFS:

```python
def bfs_iterative(graph, start):
    queue = [start]   # FIFO: pop from the front
    path = []         # visited vertices, in visit order
    while queue:
        vertex = queue.pop(0)  # a collections.deque would make this O(1)
        if vertex in path:
            continue  # already visited
        path.append(vertex)
        for neighbor in graph[vertex]:
            queue.append(neighbor)
    return path

adjacency_list = {1: [2],
                  2: [3, 4, 5],
                  3: [2, 4],
                  4: [2, 3],
                  5: [2, 6],
                  6: [4, 5, 7],
                  7: [6, 8],
                  8: [7]}

print(bfs_iterative(adjacency_list, 1))
```

The output is this:

`[1, 2, 3, 4, 5, 6, 7, 8]`

Here’s the algorithm in action:

And that’s all!