# Cyclomatic Complexity

In this one we’ll look at how to evaluate code without getting too deep into the syntax or semantics of any particular language.

## Cyclomatic Complexity

This is a measure used to indicate the number of independent code paths through a (let’s say) function. This gives us a good estimation as to how complex a function (or program) might be. The cyclomatic complexity is calculated via the control flow graph generated for a particular function. In this graph an edge connects two commands such that one might be executed after the other.
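For a tiny concrete example (the `sign` function below is invented for illustration, it isn’t from any library): a single `if` adds one decision point, and therefore one extra independent path.

```c
/* One decision point (x < 0) gives two independent paths, so v(G) = 2. */
int sign(int x)
{
    if (x < 0) {
        return -1;  /* path 1: the branch is taken */
    }
    return 1;       /* path 2: the branch is skipped */
}
```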

Thus common control flows (sequences, branches and loops) can each be visualised as a small graph of this kind.

Without going into too much graph theory or mathematical modelling, let’s say we have the following function:

```
if (c1()) {
    f1();
} else {
    f2();
}

if (c2()) {
    f3();
} else {
    f4();
}
```

The CFG of this code has seven nodes (the entry node for the `c1` decision, shown in red in the figure; `f1`; `f2`; a node for the `c2` decision; `f3`; `f4`; and the exit node, shown in blue) and eight edges, one for each possible transition between them.

To calculate the cyclomatic complexity we use the following formula:

**v(G)** = E - N + 2

where:

- **E** is the number of edges of the graph
- **N** is the number of nodes of the graph

Note: in its general form the formula is **v(G)** = E - N + 2P, where P is the number of connected components; for a single function, P = 1. Check out the Wikipedia page for an in-depth explanation.

Thus the cyclomatic complexity for the above graph (**G**) can be calculated by:

**E** = 8, **N** = 7, so **v(G)** = 8 - 7 + 2 = **3**.
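The arithmetic can be checked mechanically. Here is a sketch that encodes the example graph as an edge list and applies the formula; the node numbering is ours, chosen for illustration:

```c
/* Sketch: v(G) = E - N + 2 for a CFG stored as an edge list.
 * Numbering: 0 = entry/c1 decision, 1 = f1, 2 = f2,
 * 3 = c2 decision, 4 = f3, 5 = f4, 6 = exit. */

#define NUM_NODES 7

static const int edges[][2] = {
    {0, 1}, {0, 2},  /* c1 -> f1,   c1 -> f2   */
    {1, 3}, {2, 3},  /* f1 -> c2,   f2 -> c2   */
    {3, 4}, {3, 5},  /* c2 -> f3,   c2 -> f4   */
    {4, 6}, {5, 6},  /* f3 -> exit, f4 -> exit */
};

#define NUM_EDGES ((int)(sizeof edges / sizeof edges[0]))

int cyclomatic_complexity(int num_edges, int num_nodes)
{
    return num_edges - num_nodes + 2;
}
```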

## What Do I Do With This Information?

Cyclomatic complexity correlates with the number of defects: the higher the **v(G)**, the more potential problems you are likely to run into. This matters because higher complexity raises the risk of bugs and can be very costly in terms of software development resources. In this respect the metric behaves much like lines of code, whereby the more lines of code you have, the more potential bugs there will be.

**v(G)** also gives us some idea of the cohesion and the *structured* quality of a program. Modules with higher complexity tend to have lower cohesion, which is an important engineering principle to maintain. Cohesion, not to be confused with coupling, improves a program’s resilience and reduces potential defects.

Testing is also affected by code complexity. **v(G)** equals the number of linearly independent paths through a function, so a basis-path test suite needs as many test cases as the **v(G)** of that particular function. Higher complexity requires more tests, while lower-complexity code is easier to test and generally more maintainable.
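To make the earlier example testable, imagine its two conditions passed in as flags (a contrived refactor, purely for illustration; the names `run` and `trace_t` are ours). With **v(G)** = 3, three test cases form a basis set that exercises every edge of the CFG, even though there are four path combinations in total:

```c
/* A testable stand-in for the earlier example: the two conditions become
 * parameters and each branch records which function "ran". */
typedef struct {
    int ran_f1, ran_f2, ran_f3, ran_f4;
} trace_t;

trace_t run(int c1, int c2)
{
    trace_t t = {0, 0, 0, 0};

    if (c1) { t.ran_f1 = 1; } else { t.ran_f2 = 1; }
    if (c2) { t.ran_f3 = 1; } else { t.ran_f4 = 1; }

    return t;
}

/* Basis paths, one test case each:
 *   run(1, 1) -> f1, f3
 *   run(0, 1) -> f2, f3
 *   run(1, 0) -> f1, f4 */
```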

## Is v(G) Always Right?

Not always. Code metrics are never fully reliable, and code complexity is no exception. A simple and understandable method may very well have a high complexity value, and that value could potentially be calculated differently by various tools. Additionally, code complexity may not be the best tool for predicting software characteristics: reducing cyclomatic complexity doesn’t always lead to fewer defects or less error-prone code.

## Guidelines

Although cyclomatic complexity might not be *that* great at predicting things, it is often a good indicator of how to test, when to refactor, and general software quality. As a general guideline, try to keep the **v(G)** of a function to a maximum value of **10**. This is also recommended by safety standards for Road Vehicles - Functional Safety & Medical Device Software.
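One common way to bring a function back under such a limit is to move decisions into data. A sketch (the grading example is invented for illustration):

```c
/* Before: one decision per grade band, so v(G) = 5, and it grows with
 * every new band added. */
char grade_branchy(int score)
{
    if (score >= 90) return 'A';
    if (score >= 80) return 'B';
    if (score >= 70) return 'C';
    if (score >= 60) return 'D';
    return 'F';
}

/* After: the decisions move into a lookup table. The loop condition and
 * the single `if` leave v(G) = 3, no matter how many bands are added. */
char grade_table(int score)
{
    static const struct { int min; char letter; } bands[] = {
        {90, 'A'}, {80, 'B'}, {70, 'C'}, {60, 'D'},
    };
    unsigned i;

    for (i = 0; i < sizeof bands / sizeof bands[0]; i++) {
        if (score >= bands[i].min) {
            return bands[i].letter;
        }
    }
    return 'F';
}
```

The table-driven version also keeps the data (grade bands) separate from the control flow, so adding a band is a one-line change that leaves the complexity untouched.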

## References

Side note: I wanted to add a lot more references, much like an academic paper, to show that I’m not really making this up. But most of this information is out there, just a Google (Scholar) search away.