Reconstructing Phylogenies: Phenetic methods

There are basically two methodologies by which phylogenies can be reconstructed: phenetic and cladistic. The latter is based on Hennigian systematics – i.e. characters (apomorphies, plesiomorphies, etc.) – while the former is based on distance; I’ll explain what this means in a bit. Cladistics is the norm for morphological systematics, where we operate (well, mostly) under the parsimony principle – the fewer character changes required, the more likely it is that our tree is true. Trees made by cladistic methods are referred to as cladograms; phenetic trees are phenograms. In colloquial usage, however, phenogram is barely heard and cladogram has come to dominate, even when phenetic methods were used to construct the tree.

While you might hear that phenetics is a naughty word, phenetic methods are still widely used for estimation, for classification (at the species level, it’s hard to find autapomorphies most of the time), and in molecular systematics. They can also be used in linguistics, although there, too, the equivalent of cladistics tends to be more accurate. Phenetics has major problems for phylogenetic reconstruction, mostly stemming from the differing algorithms, the assumptions they make, and how they are calculated.

That said, there are some undeniable advantages to phenetics, which is why it’s still around. It allows us to combine morphological and molecular matrices (a distance matrix is a distance matrix, no matter what the distances were measured between). It is also less skill-intensive – anyone can measure the width of an ant head, but not many can recognise the autapomorphy that separates the Formicinae (which is what one would need when working cladistically). Phenetics is thus arguably easier for field guides and identification keys.

This post will introduce phenetic methods for reconstructing phylogenies, because I notice that some use them while calling them cladistic – molecular systematists are especially guilty of this, which annoys me to no end. That is not to say that all molecular systematics is phenetic. This post will outline the most popular phenetic methods so you can recognise them when you see them.

What is meant by distance-based is that in phenetics, the number of differences between data sets is quantified. In molecular systematics, this can be the number of differences between gene or protein sequences. In morphological phenetics (a.k.a. numerical taxonomy), measurements are taken and normalised (so that they are statistically comparable) – this points to another big problem for phenetics: one cannot reconstruct a tree unless one has specimens from the same life cycle stage. A distance matrix is then computed, based either on Euclidean distance or on a correlation coefficient. Two groups independently converged on the idea: Fitch & Margoliash (1967) and Cavalli-Sforza & Edwards (1967).
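To make this concrete, here is a minimal sketch (in Python, with invented sequences) of how a distance matrix might be computed from aligned molecular sequences using the simple p-distance – the proportion of sites at which two sequences differ. A real analysis would use a substitution-model correction, but the principle is the same.

```python
import numpy as np

def p_distance(a, b):
    """Proportion of sites at which two aligned sequences differ."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Invented aligned sequences, purely for illustration
taxa = ["A", "B", "C", "D"]
seqs = ["ACGTACGTAC", "ACGTACGTTC", "ACGGACCTTC", "TCGGACCTTG"]

n = len(taxa)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = p_distance(seqs[i], seqs[j])
print(D)
```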

This matrix is then run through the algorithm of your choice. These algorithms fall into two categories: those based on clustering and those based on optimality.

Cluster-Based

The cluster-based methods are the simplest. They begin by combining the most similar units into pairs, forming the tips of the tree. The distance matrix is then recalculated for the new groupings, and another round of grouping takes place, forming deeper nodes in your tree. Eventually, when everything has been combined into one group, you have your final tree. The two most commonly encountered algorithms are the unweighted pair group method with arithmetic mean (UPGMA) and the neighbor-joining method (NJ). Others, historically less popular, include the single-link method, the weighted pair group method with arithmetic mean (WPGMA), the centroid average (WPGMC) and Spearman’s average (WPGMS).

NJ is the most popular for “quick-and-dirty” analyses – the results are usually astoundingly accurate and the computation is quick. NJ works exactly as described above – it “joins neighbours” together, where neighbours are defined as the most similar units; see Atteson (1999) for the mathematics behind this. It was developed by Saitou & Nei (1987). The key to NJ is the NJ criterion, which determines exactly how the clusters are chosen. A significant modification was made by Bruno et al. (2000), who introduced the possibility of weighting units, to help in dealing with convergences (and generally to remove cases where the algorithm would commit a stupid mistake). The modified method is creatively called weighted neighbor joining (WEIGHBOR).
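For the curious, here is a minimal sketch of a single NJ step under the standard criterion: compute Q(i, j) = (n − 2)·d(i, j) − Σk d(i, k) − Σk d(j, k) for every pair and join the pair that minimises Q. The distance matrix is invented for illustration.

```python
import numpy as np

def nj_pick_pair(D):
    """Return the pair (i, j) minimising the NJ criterion
    Q(i, j) = (n - 2) * D[i, j] - sum_k D[i, k] - sum_k D[j, k]."""
    n = D.shape[0]
    r = D.sum(axis=1)                    # row sums
    best_pair, best_q = None, np.inf
    for i in range(n):
        for j in range(i + 1, n):
            q = (n - 2) * D[i, j] - r[i] - r[j]
            if q < best_q:
                best_pair, best_q = (i, j), q
    return best_pair, best_q

# Invented distance matrix; the full algorithm would join the winning
# pair, recompute the matrix, and repeat until the tree is resolved.
D = np.array([[0., 5., 9., 9.],
              [5., 0., 10., 10.],
              [9., 10., 0., 8.],
              [9., 10., 8., 0.]])
print(nj_pick_pair(D))
```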

UPGMA (Sokal & Michener, 1958), also called the average linkage method by Sokal & Sneath (1963), is the original, and was popular, but is now rightly ignored. The reason is that unless the distances in the matrix accumulated very regularly, UPGMA gives faulty results. In other words, it implicitly requires a perfect molecular clock – and as my readers know, such a thing does not exist. In my experience, what this leads to is not so much a wrong tree topology as grave errors in the distance to the divergence point. Note that you cannot diagnose this from the tree itself: UPGMA forces the two branches descending from each node to span equal depths, so its trees always look clocklike whether or not evolution actually was (Felsenstein, 2004). As the name suggests, averages are the way in which units are clustered together. In each round, the average distances between groups are calculated, and the two groups closest to each other are lumped together; see the sketch below. Repeat until everything is in one group.
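And here is a bare-bones UPGMA sketch, just to show how mechanical the averaging is. The input matrix is invented, and a real implementation would also track branch lengths; this one only recovers the topology.

```python
import numpy as np

def upgma(D, labels):
    """Agglomerative clustering with size-weighted averaging of distances.
    Returns the tree topology as nested tuples (branch lengths omitted)."""
    D = D.astype(float).copy()
    clusters = [(lab, 1) for lab in labels]   # (subtree, number of tips)
    while len(clusters) > 1:
        n = len(clusters)
        # find the closest pair of clusters
        i, j = min(((a, b) for a in range(n) for b in range(a + 1, n)),
                   key=lambda p: D[p[0], p[1]])
        (ti, si), (tj, sj) = clusters[i], clusters[j]
        # distances to the merged cluster: size-weighted (arithmetic) average
        new_row = (si * D[i] + sj * D[j]) / (si + sj)
        keep = [k for k in range(n) if k not in (i, j)]
        D_new = np.zeros((n - 1, n - 1))
        D_new[:n - 2, :n - 2] = D[np.ix_(keep, keep)]
        D_new[-1, :n - 2] = D_new[:n - 2, -1] = new_row[keep]
        D = D_new
        clusters = [clusters[k] for k in keep] + [((ti, tj), si + sj)]
    return clusters[0][0]

D = np.array([[0., 2., 6., 6.],
              [2., 0., 6., 6.],
              [6., 6., 0., 4.],
              [6., 6., 4., 0.]])
print(upgma(D, ["A", "B", "C", "D"]))   # (('A', 'B'), ('C', 'D'))
```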

Optimality-Based

Optimality-based methods are more complex and computationally demanding. Whereas the cluster-based methods produce only one tree, optimality-based algorithms evaluate many candidate trees. The important metric is the evolutionary distance represented by each branch: the distances implied by a candidate tree are compared to the observed evolutionary distances in the matrix, and the comparison is done using a variant of the least squares method.

It’s important to understand the differences between the various least squares frameworks; a small numerical sketch follows the list below.

  • Ordinary least squares (OLS) is applicable when the evolutionary distances are independent of each other and equally variable. In other words, when you have an ideal tree.
  • Weighted least squares is applicable when the evolutionary distances are independent, but their variances differ (e.g. because of variation in branch lengths).
  • Generalised least squares has no such restrictions.
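Here is the promised sketch of the first two criteria: score a candidate tree by comparing its implied (patristic) distances against the observed ones. The WLS weights shown (1/D², à la Fitch & Margoliash) are one common choice among several; GLS would additionally require the full covariance matrix of the distance estimates, which is why I have left it out.

```python
import numpy as np

def ols_score(D_obs, D_tree):
    """Ordinary least squares: every distance weighted equally."""
    iu = np.triu_indices_from(D_obs, k=1)
    return np.sum((D_obs[iu] - D_tree[iu]) ** 2)

def wls_score(D_obs, D_tree):
    """Weighted least squares: down-weight large (noisier) distances."""
    iu = np.triu_indices_from(D_obs, k=1)
    w = 1.0 / D_obs[iu] ** 2            # Fitch & Margoliash-style weights
    return np.sum(w * (D_obs[iu] - D_tree[iu]) ** 2)

# Invented observed distances and distances implied by some candidate tree
D_obs = np.array([[0., 3., 7.],
                  [3., 0., 6.],
                  [7., 6., 0.]])
D_tree = np.array([[0., 3., 7.5],
                   [3., 0., 5.5],
                   [7.5, 5.5, 0.]])
print(ols_score(D_obs, D_tree), wls_score(D_obs, D_tree))
```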

It should be obvious which of these is the one used in phylogenetics. Branch lengths are never constant – unless you assume a perfect molecular clock – so OLS is inappropriate. Because we automatically assume common descent, there is always a degree of dependence among the evolutionary distances (they share branches), so weighted least squares is also inappropriate, leaving generalised least squares as the one to use (unless there are special circumstances).

That said, it may be computationally advantageous to use OLS or weighted least squares; for example, ME, described below, often produces reliable trees when used with OLS (Rzhetsky & Nei, 1993). This time–accuracy trade-off is one that we often have to face, but fortunately, with computing (and cloud computing) power increasing all the time, and algorithms becoming more optimised, it may soon be a thing of the past.

The most popular method here is undoubtedly the minimum evolution method (ME).

ME is the closest thing phenetics has to cladistic parsimony (a.k.a. Ockham’s Razor): from all the produced trees, it selects the one with the smallest total branch length, i.e. the tree in which the least evolution is needed to explain the data. There are many variations on the theme and differing definitions: Saitou & Nei (1987), for example, simply take the sum of all branch lengths in the tree, while others weight branches depending on how deep within the tree they sit.
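In its simplest form, the ME criterion is almost trivially easy to state in code: fit branch lengths to each candidate topology, sum them, and keep the smallest. In this sketch both the topologies and the branch lengths are invented; in practice the lengths would be fitted to the distance matrix, e.g. by OLS.

```python
# Invented candidate topologies with invented (pre-fitted) branch lengths
candidate_trees = {
    "((A,B),(C,D))": [1.0, 1.0, 2.0, 2.0, 0.5],
    "((A,C),(B,D))": [1.5, 2.5, 1.5, 2.5, 1.0],
    "((A,D),(B,C))": [2.0, 2.0, 2.0, 2.0, 1.2],
}

# Saitou & Nei's simplest criterion: the total branch length of the tree
def total_length(branch_lengths):
    return sum(branch_lengths)

best = min(candidate_trees, key=lambda t: total_length(candidate_trees[t]))
print(best, "with total length", total_length(candidate_trees[best]))
```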

To the uninitiated, I hope this gave you a glimpse into how much thought needs to go into reconstructing a phylogenetic tree – a systematist really needs to think about the methods they are using and make sure they understand how the methods work and what the results mean. Building a phylogenetic tree is definitely not an automated process (when done correctly). For the molecular systematists here, I hope this was nothing more than a refresher. If not, I suggest you check out Felsenstein (2004), or ask me to do more in-depth posts with the maths behind all of this.

For fellow morphologists, it’s okay to laugh at the arcaneness of molecular systematics. I do it all the time.

References:

Atteson K. 1999. The performance of neighbor-joining methods of phylogenetic reconstruction. Algorithmica 25, 251-278.

Bruno WJ, Socci ND & Halpern AL. 2000. Weighted Neighbor Joining: A Likelihood-Based Approach to Distance-Based Phylogeny Reconstruction. Molecular Biology and Evolution 17, 189-197.

Cavalli-Sforza LL & Edwards AWF. 1967. Phylogenetic analysis: models and estimation procedures. Evolution 21, 550-570.

Felsenstein J. 2004. Inferring Phylogenies. Sinauer Associates.

Fitch W & Margoliash E. 1967. Construction of phylogenetic trees. Science 155, 279-284.

Rzhetsky A & Nei M. 1993. Theoretical foundation for the minimum-evolution method of phylogenetic inference. Molecular Biology and Evolution 10, 1073-1095.

Saitou N & Nei M. 1987. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Molecular Biology and Evolution 4, 406-425.

Sokal RR & Michener CD. 1958. A statistical method for evaluating systematic relationships. University of Kansas Science Bulletin 38, 1409-1438.

Sokal RR & Sneath PHA. 1963. Principles of Numerical Taxonomy. W.H. Freeman.
