Multi-Omics factor analysis - a framework for unsupervised integration of multi-omic data sets

Update: This paper has now been published at Molecular Systems Biology.

“Multi-Omics factor analysis - a framework for unsupervised integration of multi-omic data sets” by Ricard Argelaguet, Britta Velten, Damien Arnol, Sascha Dietrich, Thorsten Zenz, John C. Marioni, Wolfgang Huber, Florian Buettner, and Oliver Stegle.

I selected this article for review for two reasons:

  • Methods integrating multiple types of ‘Omics data are in great demand, both in bulk tissues and at the single-cell level.
  • I have an interest in methods that use factor analysis to capture biological and technical sources of variation.

Both reviewers found this paper to be of potential interest to a wide community of people working on integrating data types from different modalities. However, both raised questions about methodological and implementation details that could be expanded upon. Specifically, it would be great if the authors could (1) provide more intuition and explanation of the factors inferred by MOFA (are they ordered by variance explained? is orthogonality enforced?), (2) provide more comparisons to baseline and alternative approaches, (3) give a more in-depth comparison of the individual components of the model to determine what contributes the most information, and (4) add more details on the availability and location of the data used. I believe incorporating these details would make the work more intuitive and accessible to a larger audience.

I want to thank the authors for sharing their work on bioRxiv before it has undergone peer review. I also want to take an opportunity to thank the reviewers who have donated their time to evaluate the novelty, strengths, and limitations of this work. One reviewer chose to remain anonymous, and one chose to be named. Both reviewers were faculty. The two reviews are available in full below.

Reviewer 1 (Davide Risso)

The manuscript is well written and the method is statistically rigorous and promises to be very useful, given the increasing number of studies that include multiple data modalities on the same samples. Reading the paper made me immediately want to try this method on my own data!

The model fit is implemented in Python, while the downstream analyses, including the exploration and visualization of the results, are implemented in R. Conveniently, the whole analysis can be carried out in R, via what I assume is a wrapper that calls the Python modules. I’m not familiar enough with Python to provide useful feedback on that part of the implementation. But as for the R package, I was happy to see that it uses well-established Bioconductor data structures (MultiAssayExperiment) and that it provides a detailed user guide with extensive examples.

The approach itself is very flexible. The authors use a Bayesian model that allows the user to choose the appropriate likelihood for any given data type (Gaussian, Poisson, Binomial). It can be used for the visualization, imputation, gene-set enrichment, and clustering of samples for which multiple -omic data modalities have been collected. The extent of the capabilities of the method is very well exemplified by the CLL datasets that the authors analyze in the paper. I was particularly impressed by the computational performance of the model (linear in the number of factors, features, samples, and data modalities).

There are only a few points that were not immediately clear to me while reading the paper that hopefully could be clarified by the authors.

  • Are the factors inferred by MOFA ordered by variance explained? One nice property of PCA is that one can compute all (or a large number) of the factors, and if one later realizes that only k are needed, one can simply select the first k, which are guaranteed to be the k factors that explain the most variance. Does MOFA have this property? If not, does that mean one needs to recompute all the factors to try a different threshold on the variance explained?
  • Is orthogonality enforced among the factors? If not, how do the authors ensure that the solution is identifiable? And what happens in terms of interpretation if two factors are correlated? My intuition is that the regularization parameters for W will force orthogonality, but this should be discussed in some detail (unless it is and I missed the discussion).
  • This may be just me not fully understanding the Bayesian model, but does the model account for the overdispersion (with respect to Poisson and Binomial distributions) that is typically observed in real data?
  • The data used could be described in more detail (perhaps in a supplement if it is published elsewhere). Also, it is not clear if (or when) the dataset is or will be publicly available.
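The PCA properties the reviewer contrasts with MOFA (a variance-ordered, nested solution with orthogonal loadings) can be illustrated with a minimal numpy sketch on synthetic data. This is not MOFA itself, and all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
X = X - X.mean(axis=0)  # center features, as PCA assumes

# PCA via SVD: rows of Vt are loadings, singular values give the variances
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained_var = s**2 / (X.shape[0] - 1)

# Property 1: components come out ordered by variance explained, so the
# first k of a larger decomposition are exactly the top-k solution.
assert np.all(np.diff(explained_var) <= 0)

# Property 2: the loadings are orthonormal, so the factors are uncorrelated.
assert np.allclose(Vt @ Vt.T, np.eye(Vt.shape[0]), atol=1e-10)
```

Because the top-k loadings of a larger PCA are exactly the rank-k solution, one can over-compute factors once and threshold on variance later; the reviewer’s question is whether MOFA’s factors behave the same way.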

Editorial note: This reviewer reviewed the pre-print posted Nov 2017.

Reviewer 2 (Anonymous Reviewer)

The authors describe Multi-Omics Factor Analysis (MOFA), an approach that learns sources of variability in a dataset containing multi-omic measurements. The work is generally interesting. Prior work on unsupervised methods that decompose a dataset into factors is only lightly touched on. Multi-omic analyses for the purpose of identifying disease subtypes are also not heavily discussed. However, both naturally lead to the work in question. The work is flashy. One thing that is generally lacking in the contribution is a comparison to a number of obvious potential alternative approaches. Thus, while the work does a fine job of showing that this approach is reasonable, it leaves a methodologist without an understanding of why.

Major concerns:

The authors could do a substantially better job of putting their work into the context of existing work. There are methods for multi-omic subtyping, such as SNF, that would make for an interesting comparison. The approaches are different, so understanding their relative strengths and limitations could help methodologists design new methods.

It would be helpful to see how at least a baseline approach, such as sparse PCA or NMF applied to the concatenated datasets after reasonable scaling, would perform in this setting.
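As a rough sketch of that baseline (assuming scikit-learn is available; the toy data, the per-block scaling choice, and all variable names are illustrative, not from the paper), one might put each omic block on a comparable scale, concatenate the features, and run an off-the-shelf NMF:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
# Two toy "omic" blocks measured on the same 50 samples
# (kept non-negative so NMF applies directly)
rna = rng.poisson(5.0, size=(50, 200)).astype(float)
methyl = rng.beta(2.0, 2.0, size=(50, 100))

# Scale the blocks to comparable norms before concatenating,
# so the larger block does not dominate the factorization
blocks = [b / np.linalg.norm(b) for b in (rna, methyl)]
X = np.hstack(blocks)  # (50 samples, 300 stacked features)

model = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
factors = model.fit_transform(X)   # sample-level factors, shape (50, 5)
loadings = model.components_       # feature loadings, shape (5, 300)
```

Comparing the variance explained and downstream clustering from such a baseline against MOFA’s factors would help show what the model’s group-wise sparsity structure actually contributes.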

There are a number of components of the model (inactive, fully shared, unique, and partially shared). Testing the effect of leaving these out individually would help to understand what contributes to performance in this space.

Minor concerns:

The term “interpretable” is thrown around in a manner that may not be justified. “MOFA infers an interpretable low-dimensional…” The extent to which the low-dimensional representation is interpretable depends somewhat on one’s definition of interpretable. Can a biologist readily look at a factor and design a new experiment to test the mechanism underlying it? Would this be true for any dataset, or is it specific to this dataset?

In summary, the work is interesting. I would just like to see a bit more presented in terms of which steps are really necessary to make the method work. For clarity, I reviewed the recently updated version of the preprint.

Stephanie Hicks is an Assistant Professor in the Department of Biostatistics at Johns Hopkins Bloomberg School of Public Health. She develops statistical methods, tools and open software for the analysis of (epi)genomics, functional genomics and single-cell genomics data.