Showing preview only (204K chars total). Download the full file or copy to clipboard to get everything.
Repository: MarcElosua/SPOTlight
Branch: devel
Commit: 984a7ed4e20f
Files: 45
Total size: 191.4 KB
Directory structure:
gitextract_hdjmozgy/
├── .Rbuildignore
├── .github/
│ └── workflows/
│ └── check-bioc.yml
├── .gitignore
├── DESCRIPTION
├── LICENSE.md
├── NAMESPACE
├── NEWS
├── R/
│ ├── RcppExports.R
│ ├── SPOTlight.R
│ ├── data.R
│ ├── plotCorrelationMatrix.R
│ ├── plotImage.R
│ ├── plotInteractions.R
│ ├── plotSpatialScatterpie.R
│ ├── plotTopicProfiles.R
│ ├── runDeconvolution.R
│ ├── trainNMF.R
│ └── utils.R
├── README.md
├── man/
│ ├── SPOTlight.Rd
│ ├── data.Rd
│ ├── dot-filter.Rd
│ ├── plotCorrelationMatrix.Rd
│ ├── plotImage.Rd
│ ├── plotInteractions.Rd
│ ├── plotSpatialScatterpie.Rd
│ ├── plotTopicProfiles.Rd
│ ├── runDeconvolution.Rd
│ └── trainNMF.Rd
├── src/
│ ├── Makevars
│ ├── Makevars.win
│ ├── RcppExports.cpp
│ └── nmf.cpp
├── tests/
│ ├── testthat/
│ │ ├── test-SPOTlight-steps.R
│ │ ├── test-SPOTlight.R
│ │ ├── test-plotCorrelationMatrix.R
│ │ ├── test-plotImage.R
│ │ ├── test-plotInteractions.R
│ │ ├── test-plotSpatialScatterpie.R
│ │ ├── test-plotTopicProfiles.R
│ │ ├── test-runDeconvolution.R
│ │ ├── test-trainNMF.R
│ │ └── test-utils.R
│ └── testthat.R
└── vignettes/
└── SPOTlight_kidney.Rmd
================================================
FILE CONTENTS
================================================
================================================
FILE: .Rbuildignore
================================================
^.*\.Rproj$
^\.Rproj\.user$
^_pkgdown\.yml$
^docs$
^pkgdown$
================================================
FILE: .github/workflows/check-bioc.yml
================================================
## Read more about GitHub actions the features of this GitHub Actions workflow
## at https://lcolladotor.github.io/biocthis/articles/biocthis.html#use_bioc_github_action
##
## For more details, check the biocthis developer notes vignette at
## https://lcolladotor.github.io/biocthis/articles/biocthis_dev_notes.html
##
## You can add this workflow to other packages using:
## > biocthis::use_bioc_github_action()
##
## Using GitHub Actions exposes you to many details about how R packages are
## compiled and installed in several operating system.s
### If you need help, please follow the steps listed at
## https://github.com/r-lib/actions#where-to-find-help
##
## If you found an issue specific to biocthis's GHA workflow, please report it
## with the information that will make it easier for others to help you.
## Thank you!
## Acronyms:
## * GHA: GitHub Action
## * OS: operating system
on:
push:
pull_request:
name: R-CMD-check-bioc
## These environment variables control whether to run GHA code later on that is
## specific to testthat, covr, and pkgdown.
##
## If you need to clear the cache of packages, update the number inside
## cache-version as discussed at https://github.com/r-lib/actions/issues/86.
## Note that you can always run a GHA test without the cache by using the word
## "/nocache" in the commit message.
env:
has_testthat: 'true'
run_covr: 'true'
run_pkgdown: 'true'
has_RUnit: 'false'
cache-version: 'cache-v1'
run_docker: 'false'
jobs:
build-check:
runs-on: ${{ matrix.config.os }}
name: ${{ matrix.config.os }} (${{ matrix.config.r }})
container: ${{ matrix.config.cont }}
## Environment variables unique to this job.
strategy:
fail-fast: false
matrix:
config:
- { os: ubuntu-latest, r: '4.1', bioc: '3.14', cont: "bioconductor/bioconductor_docker:RELEASE_3_14", rspm: "https://packagemanager.rstudio.com/cran/__linux__/focal/latest" }
- { os: ubuntu-latest, r: '4.2', bioc: '3.15', cont: "bioconductor/bioconductor_docker:devel", rspm: "https://packagemanager.rstudio.com/cran/__linux__/focal/latest" }
- { os: macOS-latest, r: '4.1', bioc: '3.14'}
# - { os: macOS-latest, r: '4.2', bioc: '3.15'}
- { os: windows-latest, r: '4.1', bioc: '3.14'}
# - { os: windows-latest, r: '4.2', bioc: '3.15'}
## Check https://github.com/r-lib/actions/tree/bioc/examples
## for examples using the http-user-agent
env:
R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
RSPM: ${{ matrix.config.rspm }}
NOT_CRAN: true
TZ: UTC
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
steps:
## Set the R library to the directory matching the
## R packages cache step further below when running on Docker (Linux).
- name: Set R Library home on Linux
if: runner.os == 'Linux'
run: |
mkdir /__w/_temp/Library
echo ".libPaths('/__w/_temp/Library')" > ~/.Rprofile
## Most of these steps are the same as the ones in
## https://github.com/r-lib/actions/blob/bioc/examples/check-standard.yaml
## If they update their steps, we will also need to update ours.
- name: Checkout Repository
uses: actions/checkout@v2
## R is already included in the Bioconductor docker images
- name: Setup R from r-lib
if: runner.os != 'Linux'
uses: r-lib/actions/setup-r@master
with:
r-version: ${{ matrix.config.r }}
http-user-agent: ${{ matrix.config.http-user-agent }}
## pandoc is already included in the Bioconductor docker images
- name: Setup pandoc from r-lib
if: runner.os != 'Linux'
uses: r-lib/actions/setup-pandoc@master
- name: Query dependencies
run: |
install.packages('remotes')
saveRDS(remotes::dev_package_deps(dependencies = TRUE), ".github/depends.Rds", version = 2)
shell: Rscript {0}
- name: Restore R package cache
if: "!contains(github.event.head_commit.message, '/nocache') && runner.os != 'Linux'"
uses: actions/cache@v2
with:
path: ${{ env.R_LIBS_USER }}
key: ${{ env.cache-version }}-${{ runner.os }}-biocversion-RELEASE_3_14-r-4.1-${{ hashFiles('.github/depends.Rds') }}
restore-keys: ${{ env.cache-version }}-${{ runner.os }}-biocversion-RELEASE_3_14-r-4.1-
- name: Cache R packages on Linux
if: "!contains(github.event.head_commit.message, '/nocache') && runner.os == 'Linux' "
uses: actions/cache@v2
with:
path: /home/runner/work/_temp/Library
key: ${{ env.cache-version }}-${{ runner.os }}-biocversion-RELEASE_3_14-r-4.1-${{ hashFiles('.github/depends.Rds') }}
restore-keys: ${{ env.cache-version }}-${{ runner.os }}-biocversion-RELEASE_3_14-r-4.1-
- name: Install Linux system dependencies
if: runner.os == 'Linux'
run: |
sysreqs=$(Rscript -e 'cat("apt-get update -y && apt-get install -y", paste(gsub("apt-get install -y ", "", remotes::system_requirements("ubuntu", "20.04")), collapse = " "))')
echo $sysreqs
sudo -s eval "$sysreqs"
- name: Install macOS system dependencies
if: matrix.config.os == 'macOS-latest'
run: |
## Enable installing XML from source if needed
brew install libxml2
echo "XML_CONFIG=/usr/local/opt/libxml2/bin/xml2-config" >> $GITHUB_ENV
## Required to install magick as noted at
## https://github.com/r-lib/usethis/commit/f1f1e0d10c1ebc75fd4c18fa7e2de4551fd9978f#diff-9bfee71065492f63457918efcd912cf2
brew install imagemagick@6
## For textshaping, required by ragg, and required by pkgdown
brew install harfbuzz fribidi
## For installing usethis's dependency gert
brew install libgit2
## Required for tcltk
brew install xquartz --cask
- name: Install Windows system dependencies
if: runner.os == 'Windows'
run: |
## Edit below if you have any Windows system dependencies
shell: Rscript {0}
- name: Install BiocManager
run: |
message(paste('****', Sys.time(), 'installing BiocManager ****'))
remotes::install_cran("BiocManager")
shell: Rscript {0}
- name: Set BiocVersion
run: |
BiocManager::install(version = "${{ matrix.config.bioc }}", ask = FALSE, force = TRUE)
shell: Rscript {0}
- name: Install dependencies pass 1
run: |
## Try installing the package dependencies in steps. First the local
## dependencies, then any remaining dependencies to avoid the
## issues described at
## https://stat.ethz.ch/pipermail/bioc-devel/2020-April/016675.html
## https://github.com/r-lib/remotes/issues/296
## Ideally, all dependencies should get installed in the first pass.
## First install basic packages
install.packages(c("knitr", "rmarkdown", "devtools", "sessioninfo"))
## Next install Imports from CRAN
install.packages(c("ggplot2" "Matrix", "matrixStats",
"nnls", "SeuratObject"))
## Next install adjacent dependencies
install.packages(c( "ggcorrplot", "grid", "igraph", "jpeg", "methods",
"png", "scater", "scatterpie", "Seurat"))
## Next Bioconductor packages
BiocManager::install(c( "BiocStyle", "SummarizedExperiment",
"SingleCellExperiment", "ExperimentHub", "scran",
"TENxVisiumData", "TabulaMurisSenisData", "SpatialExperiment"))
## Github packages
suppressWarnings(devtools::install_github("satijalab/seurat-data"))
## Set the repos source depending on the OS
## Alternatively use https://storage.googleapis.com/bioconductor_docker/packages/
## though based on https://bit.ly/bioc2021-package-binaries
## the Azure link will be the main one going forward.
gha_repos <- if(
.Platform$OS.type == "unix" && Sys.info()["sysname"] != "Darwin"
) c(
"AnVIL" = "https://bioconductordocker.blob.core.windows.net/packages/3.14/bioc",
BiocManager::repositories()
) else BiocManager::repositories()
## For running the checks
message(paste('****', Sys.time(), 'installing rcmdcheck and BiocCheck ****'))
install.packages("rcmdcheck", repos = gha_repos)
BiocManager::install("BiocCheck")
## Pass #1 at installing dependencies
## This pass uses AnVIL-powered fast binaries
## details at https://github.com/nturaga/bioc2021-bioconductor-binaries
## The speed gains only apply to the docker builds.
message(paste('****', Sys.time(), 'pass number 1 at installing dependencies: local dependencies ****'))
remotes::install_local(dependencies = TRUE, repos = gha_repos, build_vignettes = FALSE, upgrade = TRUE)
continue-on-error: true
shell: Rscript {0}
- name: Install dependencies pass 2
run: |
## Pass #2 at installing dependencies
## This pass does not use AnVIL and will thus update any packages
## that have seen been updated in Bioconductor
message(paste('****', Sys.time(), 'pass number 2 at installing dependencies: any remaining dependencies ****'))
remotes::install_local(dependencies = TRUE, repos = BiocManager::repositories(), build_vignettes = TRUE, upgrade = TRUE, force = TRUE)
shell: Rscript {0}
- name: Install BiocGenerics
if: env.has_RUnit == 'true'
run: |
## Install BiocGenerics
BiocManager::install("BiocGenerics")
shell: Rscript {0}
- name: Install covr
if: github.ref == 'refs/heads/main' && env.run_covr == 'true' && runner.os == 'Linux'
run: |
remotes::install_cran("covr")
shell: Rscript {0}
- name: Install pkgdown
if: github.ref == 'refs/heads/main' && env.run_pkgdown == 'true' && runner.os == 'Linux'
run: |
remotes::install_cran("pkgdown")
shell: Rscript {0}
- name: Session info
run: |
options(width = 100)
pkgs <- installed.packages()[, "Package"]
sessioninfo::session_info(pkgs, include_base = TRUE)
shell: Rscript {0}
- name: Run CMD check
env:
_R_CHECK_CRAN_INCOMING_: false
DISPLAY: 99.0
run: |
options(crayon.enabled = TRUE)
rcmdcheck::rcmdcheck(
args = c("--no-manual", "--no-vignettes", "--timings"),
build_args = c("--no-manual", "--keep-empty-dirs", "--no-resave-data"),
error_on = "warning",
check_dir = "check"
)
shell: Rscript {0}
## Might need an to add this to the if: && runner.os == 'Linux'
- name: Reveal testthat details
if: env.has_testthat == 'true'
run: find . -name testthat.Rout -exec cat '{}' ';'
- name: Run RUnit tests
if: env.has_RUnit == 'true'
run: |
BiocGenerics:::testPackage()
shell: Rscript {0}
- name: Run BiocCheck
env:
DISPLAY: 99.0
run: |
BiocCheck::BiocCheck(
dir('check', 'tar.gz$', full.names = TRUE),
`quit-with-status` = TRUE,
`no-check-R-ver` = TRUE,
`no-check-bioc-help` = TRUE
# `no-check-pkg-size` = TRUE
)
shell: Rscript {0}
- name: Test coverage
if: github.ref == 'refs/heads/main' && env.run_covr == 'true' && runner.os == 'Linux'
run: |
covr::codecov()
shell: Rscript {0}
- name: Install package
if: github.ref == 'refs/heads/main' && env.run_pkgdown == 'true' && runner.os == 'Linux'
run: R CMD INSTALL .
- name: Build and deploy pkgdown site
if: github.ref == 'refs/heads/main' && env.run_pkgdown == 'true' && runner.os == 'Linux'
run: |
git config --local user.name "$GITHUB_ACTOR"
git config --local user.email "$GITHUB_ACTOR@users.noreply.github.com"
Rscript -e "pkgdown::deploy_to_branch(new_process = FALSE)"
shell: bash {0}
## Note that you need to run pkgdown::deploy_to_branch(new_process = FALSE)
## at least one locally before this will work. This creates the gh-pages
## branch (erasing anything you haven't version controlled!) and
## makes the git history recognizable by pkgdown.
- name: Upload check results
if: failure()
uses: actions/upload-artifact@master
with:
name: ${{ runner.os }}-biocversion-RELEASE_3_14-r-4.1-results
path: check
- uses: docker/build-push-action@v1
if: "!contains(github.event.head_commit.message, '/nodocker') && env.run_docker == 'true' && runner.os == 'Linux' "
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
repository: marcelosua/spotlight
tag_with_ref: true
tag_with_sha: true
tags: latest
================================================
FILE: .gitignore
================================================
.Rproj.user
*.Rproj
.git/
.DS_Store
.Rhistory
.RData
.Ruserdata
vignettes/*.html
vignettes/*/
*.pdf
*/**/.pdf
docs
================================================
FILE: DESCRIPTION
================================================
Package: SPOTlight
Version: 1.13.2
Type: Package
Title: `SPOTlight`: Spatial Transcriptomics Deconvolution
Description: `SPOTlight` provides a method to deconvolute spatial transcriptomics
spots using a seeded NMF approach along with visualization tools to assess
the results. Spatially resolved gene expression profiles are key to
understand tissue organization and function. However, novel spatial
transcriptomics (ST) profiling techniques lack single-cell resolution and
require a combination with single-cell RNA sequencing (scRNA-seq)
information to deconvolute the spatially indexed datasets. Leveraging the
strengths of both data types, we developed SPOTlight, a computational tool
that enables the integration of ST with scRNA-seq data to infer the location
of cell types and states within a complex tissue. SPOTlight is centered
around a seeded non-negative matrix factorization (NMF) regression,
initialized using cell-type marker genes and non-negative least squares
(NNLS) to subsequently deconvolute ST capture locations (spots).
Authors@R: c(
person("Marc", "Elosua-Bayes", email="elosua.marc@gmail.com", role=c("aut", "cre")),
person("Zachary", "DeBruine", email="zacharydebruine@gmail.com", role="aut"),
person("Helena L.", "Crowell", email="helena@crowell.eu", role="aut"))
Depends: R (>= 4.5.0)
Imports:
ggplot2,
Matrix,
SingleCellExperiment,
sparseMatrixStats,
stats
Suggests:
BiocStyle,
colorBlindness,
DelayedArray,
DropletUtils,
ExperimentHub,
ggcorrplot,
grDevices,
grid,
igraph,
jpeg,
knitr,
methods,
png,
rmarkdown,
scater,
scatterpie,
scran,
SpatialExperiment,
SummarizedExperiment,
S4Vectors,
TabulaMurisSenisData,
TENxVisiumData,
testthat
LinkingTo:
Rcpp,
RcppEigen
biocViews:
SingleCell,
Spatial,
StatisticalMethod
License: GPL-3
Encoding: UTF-8
RoxygenNote: 7.3.3
VignetteBuilder: knitr
URL: https://github.com/MarcElosua/SPOTlight
BugReports: https://github.com/MarcElosua/SPOTlight/issues
================================================
FILE: LICENSE.md
================================================
GNU General Public License
==========================
_Version 3, 29 June 2007_
_Copyright © 2007 Free Software Foundation, Inc. <<http://fsf.org/>>_
Everyone is permitted to copy and distribute verbatim copies of this license
document, but changing it is not allowed.
## Preamble
The GNU General Public License is a free, copyleft license for software and other
kinds of works.
The licenses for most software and other practical works are designed to take away
your freedom to share and change the works. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change all versions of a
program--to make sure it remains free software for all its users. We, the Free
Software Foundation, use the GNU General Public License for most of our software; it
applies also to any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General
Public Licenses are designed to make sure that you have the freedom to distribute
copies of free software (and charge for them if you wish), that you receive source
code or can get it if you want it, that you can change the software or use pieces of
it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or
asking you to surrender the rights. Therefore, you have certain responsibilities if
you distribute copies of the software, or if you modify it: responsibilities to
respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee,
you must pass on to the recipients the same freedoms that you received. You must make
sure that they, too, receive or can get the source code. And you must show them these
terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: **(1)** assert
copyright on the software, and **(2)** offer you this License giving you legal permission
to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that there is
no warranty for this free software. For both users' and authors' sake, the GPL
requires that modified versions be marked as changed, so that their problems will not
be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of
the software inside them, although the manufacturer can do so. This is fundamentally
incompatible with the aim of protecting users' freedom to change the software. The
systematic pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we have designed
this version of the GPL to prohibit the practice for those products. If such problems
arise substantially in other domains, we stand ready to extend this provision to
those domains in future versions of the GPL, as needed to protect the freedom of
users.
Finally, every program is threatened constantly by software patents. States should
not allow patents to restrict development and use of software on general-purpose
computers, but in those that do, we wish to avoid the special danger that patents
applied to a free program could make it effectively proprietary. To prevent this, the
GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
## TERMS AND CONDITIONS
### 0. Definitions
“This License” refers to version 3 of the GNU General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this
License. Each licensee is addressed as “you”. “Licensees” and
“recipients” may be individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work in
a fashion requiring copyright permission, other than the making of an exact copy. The
resulting work is called a “modified version” of the earlier work or a
work “based on” the earlier work.
A “covered work” means either the unmodified Program or a work based on
the Program.
To “propagate” a work means to do anything with it that, without
permission, would make you directly or secondarily liable for infringement under
applicable copyright law, except executing it on a computer or modifying a private
copy. Propagation includes copying, distribution (with or without modification),
making available to the public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through a computer
network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices” to the
extent that it includes a convenient and prominently visible feature that **(1)**
displays an appropriate copyright notice, and **(2)** tells the user that there is no
warranty for the work (except to the extent that warranties are provided), that
licensees may convey the work under this License, and how to view a copy of this
License. If the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
### 1. Source Code
The “source code” for a work means the preferred form of the work for
making modifications to it. “Object code” means any non-source form of a
work.
A “Standard Interface” means an interface that either is an official
standard defined by a recognized standards body, or, in the case of interfaces
specified for a particular programming language, one that is widely used among
developers working in that language.
The “System Libraries” of an executable work include anything, other than
the work as a whole, that **(a)** is included in the normal form of packaging a Major
Component, but which is not part of that Major Component, and **(b)** serves only to
enable use of the work with that Major Component, or to implement a Standard
Interface for which an implementation is available to the public in source code form.
A “Major Component”, in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system (if any) on which
the executable work runs, or a compiler used to produce the work, or an object code
interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the
source code needed to generate, install, and (for an executable work) run the object
code and to modify the work, including scripts to control those activities. However,
it does not include the work's System Libraries, or general-purpose tools or
generally available free programs which are used unmodified in performing those
activities but which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for the work, and
the source code for shared libraries and dynamically linked subprograms that the work
is specifically designed to require, such as by intimate data communication or
control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate
automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
### 2. Basic Permissions
All rights granted under this License are granted for the term of copyright on the
Program, and are irrevocable provided the stated conditions are met. This License
explicitly affirms your unlimited permission to run the unmodified Program. The
output from running a covered work is covered by this License only if the output,
given its content, constitutes a covered work. This License acknowledges your rights
of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without
conditions so long as your license otherwise remains in force. You may convey covered
works to others for the sole purpose of having them make modifications exclusively
for you, or provide you with facilities for running those works, provided that you
comply with the terms of this License in conveying all material for which you do not
control copyright. Those thus making or running the covered works for you must do so
exclusively on your behalf, under your direction and control, on terms that prohibit
them from making any copies of your copyrighted material outside their relationship
with you.
Conveying under any other circumstances is permitted solely under the conditions
stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
### 3. Protecting Users' Legal Rights From Anti-Circumvention Law
No covered work shall be deemed part of an effective technological measure under any
applicable law fulfilling obligations under article 11 of the WIPO copyright treaty
adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention
of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of
technological measures to the extent such circumvention is effected by exercising
rights under this License with respect to the covered work, and you disclaim any
intention to limit operation or modification of the work as a means of enforcing,
against the work's users, your or third parties' legal rights to forbid circumvention
of technological measures.
### 4. Conveying Verbatim Copies
You may convey verbatim copies of the Program's source code as you receive it, in any
medium, provided that you conspicuously and appropriately publish on each copy an
appropriate copyright notice; keep intact all notices stating that this License and
any non-permissive terms added in accord with section 7 apply to the code; keep
intact all notices of the absence of any warranty; and give all recipients a copy of
this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer
support or warranty protection for a fee.
### 5. Conveying Modified Source Versions
You may convey a work based on the Program, or the modifications to produce it from
the Program, in the form of source code under the terms of section 4, provided that
you also meet all of these conditions:
* **a)** The work must carry prominent notices stating that you modified it, and giving a
relevant date.
* **b)** The work must carry prominent notices stating that it is released under this
License and any conditions added under section 7. This requirement modifies the
requirement in section 4 to “keep intact all notices”.
* **c)** You must license the entire work, as a whole, under this License to anyone who
comes into possession of a copy. This License will therefore apply, along with any
applicable section 7 additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no permission to license the
work in any other way, but it does not invalidate such permission if you have
separately received it.
* **d)** If the work has interactive user interfaces, each must display Appropriate Legal
Notices; however, if the Program has interactive interfaces that do not display
Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are
not by their nature extensions of the covered work, and which are not combined with
it such as to form a larger program, in or on a volume of a storage or distribution
medium, is called an “aggregate” if the compilation and its resulting
copyright are not used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work in an aggregate
does not cause this License to apply to the other parts of the aggregate.
### 6. Conveying Non-Source Forms
You may convey a covered work in object code form under the terms of sections 4 and
5, provided that you also convey the machine-readable Corresponding Source under the
terms of this License, in one of these ways:
* **a)** Convey the object code in, or embodied in, a physical product (including a
physical distribution medium), accompanied by the Corresponding Source fixed on a
durable physical medium customarily used for software interchange.
* **b)** Convey the object code in, or embodied in, a physical product (including a
physical distribution medium), accompanied by a written offer, valid for at least
three years and valid for as long as you offer spare parts or customer support for
that product model, to give anyone who possesses the object code either **(1)** a copy of
the Corresponding Source for all the software in the product that is covered by this
License, on a durable physical medium customarily used for software interchange, for
a price no more than your reasonable cost of physically performing this conveying of
source, or **(2)** access to copy the Corresponding Source from a network server at no
charge.
* **c)** Convey individual copies of the object code with a copy of the written offer to
provide the Corresponding Source. This alternative is allowed only occasionally and
noncommercially, and only if you received the object code with such an offer, in
accord with subsection 6b.
* **d)** Convey the object code by offering access from a designated place (gratis or for
a charge), and offer equivalent access to the Corresponding Source in the same way
through the same place at no further charge. You need not require recipients to copy
the Corresponding Source along with the object code. If the place to copy the object
code is a network server, the Corresponding Source may be on a different server
(operated by you or a third party) that supports equivalent copying facilities,
provided you maintain clear directions next to the object code saying where to find
the Corresponding Source. Regardless of what server hosts the Corresponding Source,
you remain obligated to ensure that it is available for as long as needed to satisfy
these requirements.
* **e)** Convey the object code using peer-to-peer transmission, provided you inform
other peers where the object code and Corresponding Source of the work are being
offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the
Corresponding Source as a System Library, need not be included in conveying the
object code work.
A “User Product” is either **(1)** a “consumer product”, which
means any tangible personal property which is normally used for personal, family, or
household purposes, or **(2)** anything designed or sold for incorporation into a
dwelling. In determining whether a product is a consumer product, doubtful cases
shall be resolved in favor of coverage. For a particular product received by a
particular user, “normally used” refers to a typical or common use of
that class of product, regardless of the status of the particular user or of the way
in which the particular user actually uses, or expects or is expected to use, the
product. A product is a consumer product regardless of whether the product has
substantial commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
“Installation Information” for a User Product means any methods,
procedures, authorization keys, or other information required to install and execute
modified versions of a covered work in that User Product from a modified version of
its Corresponding Source. The information must suffice to ensure that the continued
functioning of the modified object code is in no case prevented or interfered with
solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for
use in, a User Product, and the conveying occurs as part of a transaction in which
the right of possession and use of the User Product is transferred to the recipient
in perpetuity or for a fixed term (regardless of how the transaction is
characterized), the Corresponding Source conveyed under this section must be
accompanied by the Installation Information. But this requirement does not apply if
neither you nor any third party retains the ability to install modified object code
on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to
continue to provide support service, warranty, or updates for a work that has been
modified or installed by the recipient, or for the User Product in which it has been
modified or installed. Access to a network may be denied when the modification itself
materially and adversely affects the operation of the network or violates the rules
and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with
this section must be in a format that is publicly documented (and with an
implementation available to the public in source code form), and must require no
special password or key for unpacking, reading or copying.
### 7. Additional Terms
“Additional permissions” are terms that supplement the terms of this
License by making exceptions from one or more of its conditions. Additional
permissions that are applicable to the entire Program shall be treated as though they
were included in this License, to the extent that they are valid under applicable
law. If additional permissions apply only to part of the Program, that part may be
used separately under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any
additional permissions from that copy, or from any part of it. (Additional
permissions may be written to require their own removal in certain cases when you
modify the work.) You may place additional permissions on material, added by you to a
covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a
covered work, you may (if authorized by the copyright holders of that material)
supplement the terms of this License with terms:
* **a)** Disclaiming warranty or limiting liability differently from the terms of
sections 15 and 16 of this License; or
* **b)** Requiring preservation of specified reasonable legal notices or author
attributions in that material or in the Appropriate Legal Notices displayed by works
containing it; or
* **c)** Prohibiting misrepresentation of the origin of that material, or requiring that
modified versions of such material be marked in reasonable ways as different from the
original version; or
* **d)** Limiting the use for publicity purposes of names of licensors or authors of the
material; or
* **e)** Declining to grant rights under trademark law for use of some trade names,
trademarks, or service marks; or
* **f)** Requiring indemnification of licensors and authors of that material by anyone
who conveys the material (or modified versions of it) with contractual assumptions of
liability to the recipient, for any liability that these contractual assumptions
directly impose on those licensors and authors.
All other non-permissive additional terms are considered “further
restrictions” within the meaning of section 10. If the Program as you received
it, or any part of it, contains a notice stating that it is governed by this License
along with a term that is a further restriction, you may remove that term. If a
license document contains a further restriction but permits relicensing or conveying
under this License, you may add to a covered work material governed by the terms of
that license document, provided that the further restriction does not survive such
relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in
the relevant source files, a statement of the additional terms that apply to those
files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a
separately written license, or stated as exceptions; the above requirements apply
either way.
### 8. Termination
You may not propagate or modify a covered work except as expressly provided under
this License. Any attempt otherwise to propagate or modify it is void, and will
automatically terminate your rights under this License (including any patent licenses
granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a
particular copyright holder is reinstated **(a)** provisionally, unless and until the
copyright holder explicitly and finally terminates your license, and **(b)** permanently,
if the copyright holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently
if the copyright holder notifies you of the violation by some reasonable means, this
is the first time you have received notice of violation of this License (for any
work) from that copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of
parties who have received copies or rights from you under this License. If your
rights have been terminated and not permanently reinstated, you do not qualify to
receive new licenses for the same material under section 10.
### 9. Acceptance Not Required for Having Copies
You are not required to accept this License in order to receive or run a copy of the
Program. Ancillary propagation of a covered work occurring solely as a consequence of
using peer-to-peer transmission to receive a copy likewise does not require
acceptance. However, nothing other than this License grants you permission to
propagate or modify any covered work. These actions infringe copyright if you do not
accept this License. Therefore, by modifying or propagating a covered work, you
indicate your acceptance of this License to do so.
### 10. Automatic Licensing of Downstream Recipients
Each time you convey a covered work, the recipient automatically receives a license
from the original licensors, to run, modify and propagate that work, subject to this
License. You are not responsible for enforcing compliance by third parties with this
License.
An “entity transaction” is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an organization, or
merging organizations. If propagation of a covered work results from an entity
transaction, each party to that transaction who receives a copy of the work also
receives whatever licenses to the work the party's predecessor in interest had or
could give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if the predecessor
has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or
affirmed under this License. For example, you may not impose a license fee, royalty,
or other charge for exercise of rights granted under this License, and you may not
initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging
that any patent claim is infringed by making, using, selling, offering for sale, or
importing the Program or any portion of it.
### 11. Patents
A “contributor” is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The work thus
licensed is called the contributor's “contributor version”.
A contributor's “essential patent claims” are all patent claims owned or
controlled by the contributor, whether already acquired or hereafter acquired, that
would be infringed by some manner, permitted by this License, of making, using, or
selling its contributor version, but do not include claims that would be infringed
only as a consequence of further modification of the contributor version. For
purposes of this definition, “control” includes the right to grant patent
sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license
under the contributor's essential patent claims, to make, use, sell, offer for sale,
import and otherwise run, modify and propagate the contents of its contributor
version.
In the following three paragraphs, a “patent license” is any express
agreement or commitment, however denominated, not to enforce a patent (such as an
express permission to practice a patent or covenant not to sue for patent
infringement). To “grant” such a patent license to a party means to make
such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the
Corresponding Source of the work is not available for anyone to copy, free of charge
and under the terms of this License, through a publicly available network server or
other readily accessible means, then you must either **(1)** cause the Corresponding
Source to be so available, or **(2)** arrange to deprive yourself of the benefit of the
patent license for this particular work, or **(3)** arrange, in a manner consistent with
the requirements of this License, to extend the patent license to downstream
recipients. “Knowingly relying” means you have actual knowledge that, but
for the patent license, your conveying the covered work in a country, or your
recipient's use of the covered work in a country, would infringe one or more
identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you
convey, or propagate by procuring conveyance of, a covered work, and grant a patent
license to some of the parties receiving the covered work authorizing them to use,
propagate, modify or convey a specific copy of the covered work, then the patent
license you grant is automatically extended to all recipients of the covered work and
works based on it.
A patent license is “discriminatory” if it does not include within the
scope of its coverage, prohibits the exercise of, or is conditioned on the
non-exercise of one or more of the rights that are specifically granted under this
License. You may not convey a covered work if you are a party to an arrangement with
a third party that is in the business of distributing software, under which you make
payment to the third party based on the extent of your activity of conveying the
work, and under which the third party grants, to any of the parties who would receive
the covered work from you, a discriminatory patent license **(a)** in connection with
copies of the covered work conveyed by you (or copies made from those copies), or **(b)**
primarily for and in connection with specific products or compilations that contain
the covered work, unless you entered into that arrangement, or that patent license
was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied
license or other defenses to infringement that may otherwise be available to you
under applicable patent law.
### 12. No Surrender of Others' Freedom
If conditions are imposed on you (whether by court order, agreement or otherwise)
that contradict the conditions of this License, they do not excuse you from the
conditions of this License. If you cannot convey a covered work so as to satisfy
simultaneously your obligations under this License and any other pertinent
obligations, then as a consequence you may not convey it at all. For example, if you
agree to terms that obligate you to collect a royalty for further conveying from
those to whom you convey the Program, the only way you could satisfy both those terms
and this License would be to refrain entirely from conveying the Program.
### 13. Use with the GNU Affero General Public License
Notwithstanding any other provision of this License, you have permission to link or
combine any covered work with a work licensed under version 3 of the GNU Affero
General Public License into a single combined work, and to convey the resulting work.
The terms of this License will continue to apply to the part which is the covered
work, but the special requirements of the GNU Affero General Public License, section
13, concerning interaction through a network will apply to the combination as such.
### 14. Revised Versions of this License
The Free Software Foundation may publish revised and/or new versions of the GNU
General Public License from time to time. Such new versions will be similar in spirit
to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that
a certain numbered version of the GNU General Public License “or any later
version” applies to it, you have the option of following the terms and
conditions either of that numbered version or of any later version published by the
Free Software Foundation. If the Program does not specify a version number of the GNU
General Public License, you may choose any version ever published by the Free
Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU
General Public License can be used, that proxy's public statement of acceptance of a
version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no
additional obligations are imposed on any author or copyright holder as a result of
your choosing to follow a later version.
### 15. Disclaimer of Warranty
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE
QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE
DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
### 16. Limitation of Liability
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY
COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS
PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL,
INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE
OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE
WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
### 17. Interpretation of Sections 15 and 16
If the disclaimer of warranty and limitation of liability provided above cannot be
given local legal effect according to their terms, reviewing courts shall apply local
law that most closely approximates an absolute waiver of all civil liability in
connection with the Program, unless a warranty or assumption of liability accompanies
a copy of the Program in return for a fee.
_END OF TERMS AND CONDITIONS_
## How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to
the public, the best way to achieve this is to make it free software which everyone
can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them
to the start of each source file to most effectively state the exclusion of warranty;
and each file should have at least the “copyright” line and a pointer to
where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like this
when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type 'show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type 'show c' for details.
The hypothetical commands `show w` and `show c` should show the appropriate parts of
the General Public License. Of course, your program's commands might be different;
for a GUI interface, you would use an “about box”.
You should also get your employer (if you work as a programmer) or school, if any, to
sign a “copyright disclaimer” for the program, if necessary. For more
information on this, and how to apply and follow the GNU GPL, see
<<http://www.gnu.org/licenses/>>.
The GNU General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may consider it
more useful to permit linking proprietary applications with the library. If this is
what you want to do, use the GNU Lesser General Public License instead of this
License. But first, please read
<<http://www.gnu.org/philosophy/why-not-lgpl.html>>.
================================================
FILE: NAMESPACE
================================================
# Generated by roxygen2: do not edit by hand
export(SPOTlight)
export(getMGS)
export(mockSC)
export(mockSP)
export(plotCorrelationMatrix)
export(plotImage)
export(plotInteractions)
export(plotSpatialScatterpie)
export(plotTopicProfiles)
export(runDeconvolution)
export(trainNMF)
import(ggplot2)
importFrom(Matrix,Matrix)
importFrom(Matrix,colSums)
importFrom(Matrix,rowSums)
importFrom(Matrix,t)
importFrom(SingleCellExperiment,SingleCellExperiment)
importFrom(SingleCellExperiment,cbind)
importFrom(SingleCellExperiment,colLabels)
importFrom(SingleCellExperiment,counts)
importFrom(grid,rasterGrob)
importFrom(grid,unit)
importFrom(sparseMatrixStats,colMedians)
importFrom(sparseMatrixStats,rowAlls)
importFrom(sparseMatrixStats,rowSds)
importFrom(sparseMatrixStats,rowSums2)
importFrom(stats,aggregate)
importFrom(stats,cor)
importFrom(stats,median)
importFrom(stats,rnbinom)
importFrom(stats,runif)
useDynLib(SPOTlight, .registration = TRUE)
================================================
FILE: NEWS
================================================
v0.99.1 ------------------------------------------------------------------------
- text
v0.99.0 ------------------------------------------------------------------------
- initial submission to Bioc devel v3.15
================================================
FILE: R/RcppExports.R
================================================
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
predict_nmf <- function(A_, w, L1, L2, threads) {
.Call(`_SPOTlight_predict_nmf`, A_, w, L1, L2, threads)
}
run_nmf <- function(A_, At_, tol, maxit, verbose, L1, L2, threads, w) {
.Call(`_SPOTlight_run_nmf`, A_, At_, tol, maxit, verbose, L1, L2, threads, w)
}
================================================
FILE: R/SPOTlight.R
================================================
#' @name SPOTlight
#' @title Deconvolution of mixture using single-cell data
#'
#' @description This is the backbone function which takes in single cell
#' expression data to deconvolute spatial transcriptomics spots.
#'
#' @param x,y single-cell and mixture dataset, respectively. Can be a
#' numeric matrix or \code{SingleCellExperiment}..
#' @param groups character vector of group labels for cells in \code{x}.
#' When \code{x} is a \code{SingleCellExperiment}.,
#' defaults to \code{colLabels(x)} and \code{Idents(x)}, respectively.
#' Make sure groups is not a Factor.
#' @param mgs \code{data.frame} or \code{DataFrame} of marker genes.
#' Must contain columns holding gene identifiers, group labels and
#' the weight (e.g., logFC, -log(p-value) a feature has in a given group.
#' @param hvg character vector containing hvg to include in the model.
#' By default NULL.
#' @param gene_id,group_id,weight_id character specifying the column
#' in \code{mgs} containing gene identifiers, group labels and weights,
#' respectively.
#' @param scale logical specifying whether to scale single-cell counts to unit
#' variance. This gives the user the option to normalize the data beforehand
#' as you see fit (CPM, FPKM, ...) when passing a matrix or specifying the
#' slot from where to extract the count data.
#' @param n_top integer scalar specifying the number of markers to select per
#' group. By default NULL uses all the marker genes to initialize the model.
#' @param slot_sc,slot_sp If the object is of class \code{SingleCellExperiment}
#' indicates matrix to use. By default "counts".
#' @param L1_nmf LASSO penalty in the range (0, 1] for NMF,
#' larger values increase sparsity of each factor
#' @param L2_nmf RUDGE penalty >0 for NMF,
#' larger values increase angle between factors and thus sparsity.
#' @param tol tolerance of the NMF model at convergence, the Pearson correlation
#' distance between models across consecutive iterations (1e-5 is publication quality)
#' @param maxit maximum number of NMF iterations for fitting
#' @param threads number of threads to use, default 0 (all threads)
#' @param verbose logical. Should information on progress be reported?
#' @param min_prop scalar in [0,1] setting the minimum contribution
#' expected from a cell type in \code{x} to observations in \code{y}.
#' By default 0.
#' @param L1_nnls_topics,L1_nnls_prop LASSO penalty in the range (0, 1] for NNLS
#' when computing cell type topic profiles and cell type proportions
#' respectively. Larger values remove "noisy" contributions more aggressively.
#' @param L2_nnls_topics,L2_nnls_prop RIDGE penalty >0 for NNLS when computing
#' cell type topic profiles and cell type proportions respectively.
#' Larger values remove "noisy" contributions more aggressively.
#' @param ... additional parameters.
#'
#' @return a numeric matrix with rows corresponding to samples
#' and columns to groups
#'
#' @author Marc Elosua Bayes, Zach DeBruine, and Helena L Crowell
#'
#' @details SPOTlight uses a Non-Negative Matrix Factorization approach to learn
#' which genes are important for each cell type. In order to drive the
#' factorization and give more importance to cell type marker genes we
#' previously compute them and use them to initialize the basis matrix. This
#' initialized matrices will then be used to carry out the factorization with
#' the single cell expression data. Once the model has learn the topic
#' profiles for each cell type we use non-negative least squares (NNLS) to
#' obtain the topic contributions to each spot. Lastly, NNLS is again used to
#' obtain the proportion of each cell type for each spot by finding the
#' fitting the single-cell topic profiles to the spots topic contributions.
#'
#' @examples
#' library(scater)
#' library(scran)
#'
#' # Use Mock data
#' # Refer to the vignette for a full workflow
#' sce <- mockSC(ng = 200, nc = 10, nt = 3)
#' spe <- mockSP(sce)
#' mgs <- getMGS(sce)
#'
#' res <- SPOTlight(
#' x = counts(sce),
#' y = counts(spe),
#' groups = as.character(sce$type),
#' mgs = mgs,
#' hvg = NULL,
#' weight_id = "weight",
#' group_id = "type",
#' gene_id = "gene")
NULL
#' @rdname SPOTlight
#' @export
#' @useDynLib SPOTlight, .registration = TRUE
#'
SPOTlight <- function(
x,
y,
groups = NULL,
mgs,
n_top = NULL,
gene_id = "gene",
group_id = "cluster",
weight_id = "weight",
hvg = NULL,
scale = TRUE,
min_prop = 0.01,
verbose = TRUE,
slot_sc = "counts",
slot_sp = "counts",
L1_nmf = 0,
L2_nmf = 0,
maxit = 100,
threads = 0,
tol = 1e-5,
L1_nnls_topics = 0,
L2_nnls_topics = 0,
L1_nnls_prop = 0,
L2_nnls_prop = 0,
...) {
# train NMF model
mod_ls <- trainNMF(
x = x,
y = rownames(y),
groups = groups,
mgs = mgs,
n_top = n_top,
gene_id = gene_id,
group_id = group_id,
weight_id = weight_id,
hvg = hvg,
verbose = verbose,
slot_sc = slot_sc,
L1_nmf = L1_nmf,
L2_nmf = L2_nmf,
tol = tol,
threads = threads,
maxit = maxit,
...)
# perform deconvolution
res <- runDeconvolution(
x = y,
mod = mod_ls[["mod"]],
ref = mod_ls[["topic"]],
scale = scale,
min_prop = min_prop,
verbose = verbose,
slot = slot_sp,
L1_nnls_topics = L1_nnls_topics,
L2_nnls_topics = L2_nnls_topics,
L1_nnls_prop = L1_nnls_prop,
L2_nnls_prop = L2_nnls_prop)
# return list of NMF model & deconvolution matrix
list(
"mat" = res[["mat"]],
"res_ss" = res[["res_ss"]],
"NMF" = mod_ls[["mod"]])
}
================================================
FILE: R/data.R
================================================
#' @rdname data
#' @name data
#' @aliases mockSC mockSP getMGS
#' @title Synthetic single-cell, mixture and marker data
#'
#' @description
#' \code{mockSC/mockSP()} are designed to generate synthetic single-cell and
#' spatial mixture data. These data are not meant to represent biologically
#' meaningful use-cases, but are solely intended for use in examples, for
#' unit-testing, and to demonstrate \code{SPOTlight}'s general functionality.
#' Finally, \code{.get_mgs()} implements a statistically naive way to select
#' markers from single-cell data; again, please don't use it in real life.
#'
#' @param ng,nc,nt,ns integer scalar specifying the number
#' of genes, cells, types (groups) and spots to simulate.
#' @param n_top integer scalar specifying the number
#' of markers to select per group.
#'
#' @return
#' \itemize{
#' \item{\code{mockSC} returns a \code{SingleCellExperiment}
#' with rows = genes, columns = single cells, and cell metadata
#' (\code{colData}) column \code{type} containing group identifiers.}
#' \item{\code{mockSP} returns a \code{SingleCellExperiment}
#' with rows = genes, columns = single cells, and cell metadata
#' (\code{colData}) column \code{type} containing group identifiers.}
#' \item{\code{getMGS} returns a \code{data.frame} with \code{nt*n_top}
#' rows and 3 columns: gene and type (group) identifier, as well as the
#' gene's weight = the proportion of counts accounted for by that type.}
#' }
#'
#' @examples
#' sce <- mockSC()
#' spe <- mockSP(sce)
#' mgs <- getMGS(sce)
NULL
#' @rdname data
#' @importFrom SingleCellExperiment cbind SingleCellExperiment
#' @importFrom stats rnbinom runif
#' @export
mockSC <- function(ng = 200, nc = 50, nt = 3) {
z <- lapply(seq_len(nt), function(t) {
ms <- 2^runif(ng, 2, 10)
ds <- 0.5 + 100 / ms
y <- rnbinom(ng * nc, mu = ms, size = 1 / ds)
y <- matrix(y, nrow = ng, ncol = nc)
dimnames(y) <- list(
paste0("gene", seq_len(ng)),
paste0("cell", seq_len(nc))
)
x <- SingleCellExperiment(list(counts = y))
x$type <- factor(
paste0("type", t),
paste0("type", seq_len(nt))
)
return(x)
})
zbind <- do.call(cbind, z)
colnames(zbind) <- make.unique(colnames(zbind))
zbind
}
#' @rdname data
#' @param x Single cell experiment object
#' @importFrom Matrix rowSums
#' @importFrom SingleCellExperiment SingleCellExperiment
#' @export
mockSP <- function(x, ns = 100) {
z <- replicate(ns, {
# sample number of cells
nc <- sample(5, 1)
# sample reference cells
cs <- sample(ncol(x), nc)
# sum up counts & rescale
y <- counts(x[, cs])
y <- rowSums(y)
# compute composition
n <- table(x$type[cs]) / nc
n <- c(unclass(n))
list(y, n)
})
# get counts
y <- t(do.call(rbind, z[1, ]))
dimnames(y) <- list(
rownames(x),
paste0("spot", seq_len(ns))
)
# get compositions
fq <- do.call(rbind, z[2, ])
rownames(fq) <- colnames(y)
# sample coordinates
xy <- matrix(runif(2 * ns), ncol = 2)
dimnames(xy) <- list(colnames(y), c("x", "y"))
SingleCellExperiment(
list(counts = y),
colData = data.frame(xy),
metadata = list(props = fq)
)
}
#' @rdname data
#' @param n_top integer specifying the number of
#' marker genes to extract for each cluster.
#' @importFrom Matrix colSums rowSums
#' @importFrom SingleCellExperiment counts
#' @importFrom stats aggregate
#' @export
getMGS <- function(x, n_top = 10) {
# compute sum of counts by group
y <- aggregate(t(counts(x)), list(x$type), sum)
rownames(y) <- y[, 1]
# Remove group column
y <- t(y[, -1])
# get proportion of counts by group
z <- lapply(rownames(y), function(gene) {
p <- prop.table(y[gene, ])
i <- which.max(p)
type <- names(i)
weight <- p[i]
data.frame(gene, type, weight)
})
z <- do.call(rbind, z)
rownames(z) <- NULL
# select 'top_n' in each group
z <- split(z, z$type)
# Iterate over groups and sort within them
z <- lapply(z, function(.) {
# Get indexes of the positions in the sorted order
o <- order(.$weight, decreasing = TRUE)
# order the markers
n <- nrow(.)
if (n < n_top)
n_top <- n
.[o, ][seq_len(n_top), ]
})
z <- do.call(rbind, z)
rownames(z) <- NULL
return(z)
}
================================================
FILE: R/plotCorrelationMatrix.R
================================================
#' @rdname plotCorrelationMatrix
#' @name plotCorrelationMatrix
#' @title Plot Correlation Matrix
#'
#' @description This function takes in a matrix with the predicted proportions
#' for each spot and returns a correlation matrix between cell types.
#'
#' @param x numeric matrix with rows = samples and columns = cell types
#' Must have at least two rows and two columns.
#' @param cor.method Method to use for correlation:
#' c("pearson", "kendall", "spearman"). By default pearson.
#' @param insig character, specialized insignificant correlation coefficients,
#' "pch", "blank" (default). If "blank", wipe away the corresponding glyphs;
#' if "pch", add characters (see pch for details) on corresponding glyphs.
#' @param colors character vector with three colors indicating the lower, mid,
#' and high color. By default c("#6D9EC1", "white", "#E46726").
#' @param hc.order logical value. If TRUE, correlation matrix will be
#' hc.ordered using hclust function.
#' @param p.mat logical value. If TRUE (default), correlation significance
#' will be used. If FALSE arguments sig.level, insig, pch, pch.col,
#' pch.cex are invalid.
#' @param ... additional graphical parameters passed to \code{ggcorrplot}.
#'
#' @return \code{ggplot} object
#'
#' @author Marc Elosua Bayes & Helena L Crowell
#'
#' @examples
#' set.seed(321)
#' x <- replicate(m <- 25, runif(10, 0, 1))
#' rownames(x) <- paste0("spot", seq_len(nrow(x)))
#' colnames(x) <- paste0("type", seq_len(ncol(x)))
#'
#' # The most basic example
#' plotCorrelationMatrix(x = x)
#'
#' # Showing the non-significant correlatinos
#' plotCorrelationMatrix(x = x, insig = "pch")
#'
#' # A more elaborated
#' plotCorrelationMatrix(
#' x = x,
#' hc.order = FALSE,
#' type = "lower",
#' outline.col = "lightgrey",
#' method = "circle",
#' colors = c("#64ccc9", "#b860bd", "#e3345d"))
#'
NULL
#' @rdname plotCorrelationMatrix
#' @importFrom Matrix colSums
#' @importFrom stats cor median
#' @import ggplot2
#' @export
plotCorrelationMatrix <- function(
x,
cor.method = c("pearson", "kendall", "spearman"),
insig = c("blank", "pch"),
colors = c("#6D9EC1", "white", "#E46726"),
hc.order = TRUE,
p.mat = TRUE,
...) {
# Check necessary packages are installed and if not STOP
.test_installed("ggcorrplot")
# If the following are left undefined select
# the first element of the vector
cor.method <- match.arg(cor.method)
insig <- match.arg(insig)
stopifnot(
is.matrix(x), is.numeric(x),
all(dim(x) > 0), ncol(x) > 1,
is.character(colors), length(colors) == 3,
is.logical(hc.order), length(hc.order) == 1,
is.logical(p.mat), length(p.mat) == 1)
# Remove columns that are all 0
x <- x[, colSums(x) > 0]
# return error if there are NAs in x
if (NA %in% x)
stop("There are NAs in x, please consider passing na.omit(x)",
" to the x argument")
corr <- cor(x)
# Compute correlation P-value
p.mat <- if (p.mat) {
ggcorrplot::cor_pmat(
x = x,
conf_int = 0.95,
method = cor.method)
}
# Plot correlation matrix as a heatmap
ggcorrplot::ggcorrplot(
corr = corr,
p.mat = p.mat,
hc.order = hc.order,
insig = insig,
lab = FALSE,
colors = colors,
...) +
theme(
plot.title = element_text(hjust = 0.5, face = "bold"),
axis.text.x = element_text(angle = 60, vjust = 1),
axis.text = element_text(vjust = 0.5))
}
================================================
FILE: R/plotImage.R
================================================
#' @rdname plotImage
#' @name plotImage
#' @title Plot JP(E)G/PNG/Raster/RGB images
#'
#' @description This function takes in an image-related object - path to
#' JP(E)G/PNG file, raster object, RGBarray. It returns a ggplot object with
#' the selected image.
#'
#' @param x A variety of objects can be passed: character string corresponding
#' to an image file path, valid file types are JPG, JPEG and PNG. It can also
#' take as input objects of class raster and RGB arrays. It can also take
#' a SpatialExperiment from which the image will be extracted.
#' @param slice Character string indicating which image slice to use when
#' SpatialExperiment objects are passed. By default uses the first
#' slice available.
#' @return \code{ggplot} object
#'
#' @author Marc Elosua Bayes & Helena L Crowell
#'
#' @examples
#' # Filename
#' path <- file.path(
#' system.file(package = "SPOTlight"),
#' "extdata/SPOTlight.png")
#' plotImage(x = path)
#' # array
#' png_img <- png::readPNG(path)
#' plotImage(png_img)
#' # SpatialExperiment
NULL
#' @export
plotImage <- function(x, slice = NULL) {
# check validity of input arguments
stopifnot(
# Check for valid x classes
is.matrix(x) | is.character(x) | is.array(x) | is(x, "rastergrob") |
is(x, "SpatialExperiment"),
# Check for valid slice classes
is.null(slice) | is.character(slice))
if (!is.array(x))
x <- .extract_image(x)
# Plot image
plt <- .plot_image(x)
}
================================================
FILE: R/plotInteractions.R
================================================
#' @rdname plotInteractions
#' @title Plot group interactions
#'
#' @aliases plotHeatmap plotNetwork
#'
#' @description This function takes in a matrix with the predicted proportions
#' for each spot and returns a heatmap \code{which = plotHeatmap} or a network
#' graph \code{which = plotNetwork} to show which cells are interacting
#' spatially.
#'
#' @param x numeric matrix with rows = samples and columns = groups.
#' Must have at least one row and column, and at least two columns.
#' @param which character string specifying the type of
#' visualization: one of "heatmap" or "network".
#' @param min_prop scalar specifying the value above which
#' a group is considered to be contributing to a given sample.
#' An interaction between groups i and j is counted for sample s
#' only when both x[s, i] and x[s, j] fall above \code{min_prop}.
#' @param metric character string specifying which metric to show:
#' one of "prop" or "jaccard".
#' @param ... additional graphical parameters passed
#' to \code{plot.igraph} when \code{which = "network"}
#' (see \code{?igraph.plotting}).
#'
#' @return base R plot
#'
#' @author Marc Elosua Bayes & Helena L Crowell
#'
#' @examples
#' library(ggplot2)
#' mat <- replicate(8, rnorm(100, runif(1, -1, 1)))
#' # Basic example
#' plotInteractions(mat)
#'
#' ### heatmap ###
#' # This returns a ggplot object that can be modified as such
#' plotInteractions(mat, which = "heatmap") +
#' scale_fill_gradient(low = "#f2e552", high = "#850000") +
#' labs(title = "Interaction heatmap", fill = "proportion")
#'
#' ### Network ###
#' # specify node names
#' nms <- letters[seq_len(ncol(mat))]
#' plotInteractions(mat, which = "network", vertex.label = nms)
#'
#' # or set column names instead
#' colnames(mat) <- nms
#' plotInteractions(mat, which = "network")
#'
#' # pass additional graphical parameters for aesthetics
#' plotInteractions(mat,
#' which = "network",
#' edge.color = "cyan",
#' vertex.color = "pink",
#' vertex.label.font = 2,
#' vertex.label.color = "maroon")
#' @export
plotInteractions <- function(x,
which = c("heatmap", "network"),
metric = c("prop", "jaccard"),
min_prop = 0, ...) {
# check validity of input arguments
which <- match.arg(which)
metric <- match.arg(metric)
stopifnot(
is.matrix(x), is.numeric(x),
all(dim(x) > 0), ncol(x) > 1,
is.numeric(min_prop), length(min_prop) == 1)
# get interactions table
if (is.null(colnames(x))) {
colnames(x) <- seq_len(ncol(x))
}
df <- .count_interactions(x, min_prop)
df <- .statistics_interaction(x, df)
switch(which,
heatmap = .plot_heatmap(x, df, metric),
network = .plot_network(x, df, metric, ...))
}
#' @importFrom sparseMatrixStats rowAlls
.count_interactions <- function(x, min_prop) {
# for each pair of groups count how many
# samples have value above 'min_prop'
x <- x > min_prop
ij <- utils::combn(colnames(x), 2)
y <- apply(ij, 2, function(.) sum(rowAlls(x[, ., drop = FALSE])))
# construct 'data.frame'
df <- data.frame(t(ij), y)
names(df) <- c("from", "to", "n")
# assure are properly ordered
y <- colnames(x)
df$i <- factor(df$from, y)
df$j <- factor(df$to, rev(y))
return(df)
}
.statistics_interaction <- function(x, df) {
# compute proportion of samples that have all groups
y <- colnames(x)
t <- colSums(x > 0)
i <- match(df$from, y)
j <- match(df$to, y)
df$ti <- t[i]
df$tj <- t[j]
df$pi <- df$n / df$ti
df$pj <- df$n / df$tj
# As suggested by @astrid12345
# https://github.com/MarcElosua/SPOTlight/issues/42
df$jaccard <- df$n / (df$ti + df$tj - df$n)
return(df)
}
#' @import ggplot2
#' @importFrom Matrix colSums
.plot_heatmap <- function(x, df, metric) {
# Initialize ggplot
p <- ggplot(df)
# Add pertinent layers
if (metric == "prop") {
# Add tile layers
p <- p + geom_tile(aes(.data$i, .data$j, fill = .data$pi)) +
geom_tile(aes(.data$j, .data$i, fill = .data$pj))
} else if (metric == "jaccard") {
# Add tile layers - Jaccard
p <- p + geom_tile(aes(.data$i, .data$j, fill = .data$jaccard))
}
# Prettify the plot :)
p +
scale_fill_viridis_c("proportion", limits = c(0, NA)) +
scale_y_discrete(limits = function(.) rev(.)) +
coord_fixed(expand = FALSE) +
labs(x = "From", y = "To", fill = "Proportion") +
theme_linedraw() +
theme(
panel.grid = element_blank(),
axis.text.x = element_text(angle = 45, hjust = 1))
}
.plot_network <- function(x, df, metric, ...) {
# Check necessary packages are installed and if not STOP
.test_installed("igraph")
w <- switch(metric,
prop = scale(df[, "n"], 1),
jaccard = df[, "jaccard"])
g <- igraph::graph_from_data_frame(df,
vertices = colnames(x),
directed = FALSE)
igraph::plot.igraph(g, edge.width = w, ...)
}
================================================
FILE: R/plotSpatialScatterpie.R
================================================
#' @rdname plotSpatialScatterpie
#' @name plotSpatialScatterpie
#' @title Spatial scatterpie
#'
#' @description This function takes in the coordinates of the spots and the
#' proportions of the cell types within each spot. It returns a plot where
#' each spot is a piechart showing proportions of the cell type composition.
#'
#' @param x Object containing the spots coordinates, it can be an object of class
#' SpatialExperiment, dataframe or matrix. For the latter two
#' rownames should have the spot barcodes to match x. If a matrix it has to
#' of dimensions nrow(y) x 2 where the columns are the x and y coordinates
#' in that order.
#' @param y Matrix or dataframe containing the deconvoluted spots. rownames
#' need to be the spot barcodes to match to x.
#' @param img Logical TRUE or FALSE indicating whether to plot the image or not.
#' Objects of classes accepted by \code{plotImage} can also be passed and
#' that image will be used. By default FALSE.
#' @param slice Character string indicating which slice to plot if img is TRUE.
#' By default uses the first image.
#' @param cell_types Vector of cell type names to plot. By default uses the
#' column names of y.
#' @param scatterpie_alpha Numeric scalar to set the alpha of the pie charts.
#' By default 1.
#' @param pie_scale Numeric scalar to set the size of the pie charts.
#' By default 0.4.
#' @param degrees From SpatialExperiment rotateImg. For clockwise (degrees > 0)
#' and counter-clockwise (degrees < 0) rotation. By default NULL.
#' @param axis From SpatialExperiment mirrorImg. When a SpatialExperiment object
#' is passed as the image return the mirror image. For horizontal (axis = "h")
#' and vertical (axis = "v") mirroring. By default NULL.
#' @param ... additional parameters to geom_scatterpie
#' @return \code{ggplot} object
#'
#' @author Marc Elosua Bayes & Helena L Crowell
#'
#' @examples
#' set.seed(321)
#'
#' # Coordinates
#' x <- replicate(2, rnorm(100))
#' rownames(x) <- paste0("spot", seq_len(nrow(x)))
#' colnames(x) <- c("imagecol", "imagerow")
#'
#' # Proportions
#' y <- replicate(m <- 5, runif(nrow(x), 0, 1))
#' y <- prop.table(y, 1)
#'
#' rownames(y) <- paste0("spot", seq_len(nrow(y)))
#' colnames(y) <- paste0("type", seq_len(ncol(y)))
#'
#' (plt <- plotSpatialScatterpie(x = x, y = y))
NULL
#' @rdname plotSpatialScatterpie
#' @import ggplot2
#' @export
plotSpatialScatterpie <- function(
x,
y,
cell_types = colnames(y),
img = FALSE,
slice = NULL,
scatterpie_alpha = 1,
pie_scale = 0.4,
degrees = NULL,
axis = NULL,
...) {
# Check necessary packages are installed and if not STOP
.test_installed("scatterpie")
# Class checks
stopifnot(
# Check x inputs
is.matrix(x) | is.data.frame(x) |
is(x, "SpatialExperiment"),
# Check y inputs
is.matrix(y) | is.data.frame(y),
# cell_types needs to be a character with max length = ncol(y)
is.character(cell_types) & length(cell_types) <= ncol(y),
# Check img
# img not checked since its checked in plotImage()
# Check slice name
is.character(slice) | is.null(slice),
# Check plotting parameters are numeric
is.numeric(scatterpie_alpha),
is.numeric(pie_scale),
is.numeric(degrees) | is.null(degrees),
axis %in% c("h", "v") | is.null(axis)
)
# If image is passed add it as the base layer, if not, no image
# Need to use isFALSE bc img can have many different inputs
# Set ymax to overlap image and piecharts
if (isFALSE(img)) {
p <- ggplot() +
coord_fixed()
ymax <- 0
} else {
# Extract image from SE objects when img is TRUE
# If image is not TRUE and not FALSE an acceptable class for plotImage
# has been passed
if (is(x, "SpatialExperiment") & isTRUE(img)) {
img <- .extract_image(x, slice)
# Rotate or mirror image if dots don't overlay properly
if (is(x, "SpatialExperiment")) {
.test_installed("SpatialExperiment")
## Rotate image if needed
if (!is.null(degrees)) {
.test_installed("grDevices")
img <- SpatialExperiment::SpatialImage(
grDevices::as.raster(img))
img <- as(img, "LoadedSpatialImage")
img <- SpatialExperiment::rotateImg(img, degrees = degrees)
img <- grDevices::as.raster(img)
}
## Make mirror image if necessary
if (!is.null(axis)) {
.test_installed("grDevices")
img <- SpatialExperiment::SpatialImage(
grDevices::as.raster(img))
img <- as(img, "LoadedSpatialImage")
img <- SpatialExperiment::mirrorImg(img, axis = axis)
img <- grDevices::as.raster(img)
}
}
}
p <- plotImage(x = img)
ymax <- max(p$coordinates$limits$y)
}
# Extract coordinate matrix from x
if (!is.matrix(x))
x <- .extract_coord(x = x, slice = slice, img = img)
# Check colnames
x <- .x_cnames(x)
# Convert y to matrix format
if (!is.matrix(x)) {
y <- as.matrix(x)
}
# Stop if x and y don't have the same number of columns or if the
# rownames are not common between them
stopifnot(
nrow(x) == nrow(y),
all(rownames(x) %in% rownames(y)))
# merge by row names (by=0 or by="row.names")
df <- merge(x, y, by = 0, all = TRUE)
# make y negative
df$coord_y_i <- abs(df$coord_y - ymax)
# Plot
p + scatterpie::geom_scatterpie(
data = df,
aes(
x = .data[["coord_x"]],
y = .data[["coord_y_i"]]
),
cols = cell_types,
color = NA,
alpha = scatterpie_alpha,
pie_scale = pie_scale,
...) +
# Below not needed bc comes from plotImage
# coord_fixed() +
theme_void() +
theme(legend.key.size = unit(0.5, "lines"))
}
.x_cnames <- function(x) {
# If the column names of x aren't right fix them
cnames <- c("coord_y", "coord_x")
if (!all(colnames(x) %in% cnames)) {
colnames(x) <- cnames
}
x
}
# Coordinates and return a matrix object where each row is a spot and the
# columns are the x and y coordinates
.extract_coord <- function(x, slice, img) {
# Iterate over all the accepted classes and return spot coordinates
if (is.data.frame(x)) {
# Convert to matrix
x <- as.matrix(x)
} else if (is(x, "SpatialExperiment")) {
.test_installed(c("SpatialExperiment"))
# Stop if there are no images or the name selected doesn't exist
stopifnot(
# Stop if there are no images
!is.null(SpatialExperiment::getImg(x)),
# Stop if the image doesn't exist
slice %in% SpatialExperiment::imgData(x)[1, "sample_id"],
# Return error if there are no colnames in the object
!is.null(colnames(x))
)
# If slice is null use the first slice
img_df <- SpatialExperiment::imgData(x)
if (is.null(slice))
slice <- img_df[1, "sample_id"]
# Scale factor to scale the coordinates
sf <- img_df[img_df$sample_id == slice, "scaleFactor"]
## Extract spot barcodes
barcodes <- colnames(x)
## Extract spatial coordinates
# coord_df <- SpatialExperiment::spatialCoords(x)
x <- as.matrix(SpatialExperiment::spatialCoords(x)[, c(1, 2)])
## Scale coordinates
x <- x * sf
## Add barcodes to coord matrix & change colnames
rownames(x) <- barcodes
} else {
stop("Couldn't extract image coordinates.
Please check class(x) is SpatialExperiment,
dataframe or matrix")
}
return(x)
}
================================================
FILE: R/plotTopicProfiles.R
================================================
#' @rdname plotTopicProfiles
#' @name plotTopicProfiles
#' @title Plot NMF topic profiles
#'
#' @description This function takes in the fitted NMF model and returns the
#' topic profiles learned for each cell \code{facet = FALSE} or cell type
#' \code{facet = TRUE}. Ideal training will return all the cell from the same
#' cell type to share a unique topic profile.
#'
#' @param x \code{list} object obtained from \code{SPOTlight}.
#' @param y vector of group labels. Should be of length
#' \code{ncol(res_lvl1$NMF$h)}.
#' @param facet logical indicating whether to stratify by group.
#' If \code{FALSE} (default), weights will be the median across cells
#' for each group (point = topic weight for a given cell type).
#' If \code{TRUE}, cell-specific weights will be shown
#' (point = topic weight of a given cell).
#' @param min_prop scalar in [0,1]. When \code{facet = TRUE},
#' only cells with a weight > \code{min_prop} will be included.
#' @param ncol integer scalar specifying the number of facet columns.
#'
#' @return \code{ggplot} object
#'
#' @author Marc Elosua Bayes & Helena L Crowell
#'
#' @examples
#' library(ggplot2)
#' x <- mockSC()
#' y <- mockSP(x)
#' z <- getMGS(x)
#'
#' res <- SPOTlight(x, y,
#' groups = x$type,
#' mgs = z,
#' group_id = "type",
#' verbose = FALSE)
#'
#' plotTopicProfiles(res[[3]], x$type, facet = TRUE)
#' plotTopicProfiles(res[[3]], x$type, facet = FALSE)
NULL
#' @rdname plotTopicProfiles
#' @importFrom stats aggregate median
#' @import ggplot2
#' @export
plotTopicProfiles <- function(
x,
y,
facet = FALSE,
min_prop = 0.01,
ncol = NULL) {
# Convert y to character
y <- as.character(y)
# check validity of input arguments
stopifnot(
is(x, "list"),
all(sort(names(x)) == sort(c("w", "d", "h"))),
is.character(y),
length(y) == ncol(x$h),
setequal(
colnames(x$w), paste0("topic_", seq_len(length(unique(y))))
),
is.logical(facet), length(facet) == 1,
is.numeric(min_prop), length(min_prop) == 1,
is.null(ncol) | (is.numeric(ncol) & length(ncol) == 1))
# get proportion of topic contribution by cell
mat <- prop.table(t(x$h), 1)
df <- data.frame(
id = seq_len(nrow(mat)),
weight = c(mat),
group = rep(y, ncol(mat)),
topic = rep(seq_len(ncol(mat)), each = nrow(mat)))
if (facet) {
# drop cells with 'weight < min_prop'
df <- df[df$weight >= min_prop, ]
# set aesthetics
x <- "id"
f <- facet_wrap(~group, ncol = ncol, scales = "free_x")
} else {
# get topic medians
df <- aggregate(weight ~ group + topic, data = df, FUN = median)
# set aesthetics
x <- "group"
f <- NULL
}
# fix topic order
df$topic <- factor(df$topic, seq_along(unique(y)))
# render plot
ggplot(df, aes(
.data[[x]], .data$topic, col = .data$weight, size = .data$weight)) +
f + geom_point() +
guides(col = guide_legend(override.aes = list(size = 2))) +
scale_size_continuous(range = c(0, 3)) +
scale_color_continuous(low = "lightgrey", high = "#3d2bff") +
xlab(if (facet) x) +
theme_bw() +
theme(
panel.grid = element_blank(),
legend.key.size = unit(0.5, "lines"),
plot.title = element_text(hjust = 0.5),
axis.text.x = element_text(angle = 45, hjust = 1))
}
================================================
FILE: R/runDeconvolution.R
================================================
#' @name runDeconvolution
#' @rdname runDeconvolution
#' @title Run Deconvolution using NNLS model
#'
#' @aliases runDeconvolution
#'
#' @description This function takes in the mixture data, the trained model & the
#' topic profiles and returns the proportion of each cell type within each
#' mixture
#'
#' @param x mixture dataset. Can be a numeric matrix,
#' \code{SingleCellExperiment} or \code{SpatialExperiment}
#' @param mod object as obtained from trainNMF.
#' @param ref object of class matrix containing the topic profiles for each cell
#' type as obtained from trainNMF.
#' @param slot If the object is of class \code{SpatialExperiment} indicates
#' matrix to use. By default "counts".
#' @inheritParams SPOTlight
#'
#' @return base a list where the first element is a list giving the NMF model and
#' the second is a matrix containing the topic profiles learnt.
#'
#' @author Marc Elosua Bayes, Zach DeBruine, and Helena L Crowell
#'
#' @examples
#' set.seed(321)
#' # mock up some single-cell, mixture & marker data
#' sce <- mockSC(ng = 200, nc = 10, nt = 3)
#' spe <- mockSP(sce)
#' mgs <- getMGS(sce)
#'
#' res <- trainNMF(
#' x = sce,
#' y = rownames(spe),
#' groups = sce$type,
#' mgs = mgs,
#' weight_id = "weight",
#' group_id = "type",
#' gene_id = "gene")
#' # Run deconvolution
#' decon <- runDeconvolution(
#' x = spe,
#' mod = res[["mod"]],
#' ref = res[["topic"]])
NULL
#' @rdname runDeconvolution
#' @importFrom Matrix colSums
#' @export
runDeconvolution <- function(
x,
mod,
ref,
scale = TRUE,
min_prop = 0.01,
verbose = TRUE,
slot = "counts",
L1_nnls_topics = 0,
L2_nnls_topics = 0,
L1_nnls_prop = 0,
L2_nnls_prop = 0,
threads = 0,
...) {
# Class checks
stopifnot(
# Check x inputs
is.matrix(x) | is(x, "DelayedMatrix") | is(x, "dgCMatrix") |
is(x, "SingleCellExperiment") |
is(x, "SpatialExperiment"),
# Check mod inputs
is.list(mod),
# check ref
is.matrix(ref),
# Check slot name
is.character(slot), length(slot) == 1,
# Check scale and verbose
is.logical(scale), length(scale) == 1,
is.logical(verbose), length(verbose) == 1,
# Check min_prop numeric
is.numeric(min_prop), length(min_prop) == 1,
min_prop >= 0, min_prop <= 1
)
# Extract expression matrix
if (!is.matrix(x))
x <- .extract_counts(x, slot)
# Get topic profiles for mixtures
mat <- .pred_hp(
x = x, mod = mod, scale = scale, verbose = verbose,
L1_nnls = L1_nnls_topics, L2_nnls = L2_nnls_topics, threads = threads)
if (verbose) message("Deconvoluting mixture data...")
# Need to scale because the matrix is also scaled to 1 with the RCPP
# approach to speed it up
ref_scale <- t(t(ref) / colSums(ref))
# Check if there is a column with all NAs after scaling -
# happens when whole column is 0s
ref_na <- is.na(ref_scale)
if (sum(ref_na) > 1)
# Set topics with NAs as all 0s
ref_scale[, which(colSums(ref_na) == nrow(ref_na))] <- 0
# The below predict_nmf function does the equivalent to
# pred <- t(mat) %*% t(ref_scale)
pred <- predict_nmf(
A_ = as(mat, "dgCMatrix"),
w = ref_scale,
L1 = L1_nnls_prop,
L2 = L2_nnls_prop,
threads = threads)
rownames(pred) <- rownames(ref_scale)
colnames(pred) <- colnames(mat)
# Proportions within each spot
res <- prop.table(pred, 2)
# 1- t(ref_scale) %*% pred map pred to mat using ref_scale
# 2- Check the differences between the original and re-mapped matrix
# 3- sum the errors for each spot (column)
# t(ref_scale) is a topic x celltype matrix
# pred is a celltype x spot matrix
# mat is a topic x spot matrix
err_mat <- (mat - ref_scale %*% pred)^2
err <- colSums(err_mat) / colSums(mat)^2
# names(err) <- colnames(res)
return(list("mat" = t(res), "res_ss" = err))
}
================================================
FILE: R/trainNMF.R
================================================
#' @name trainNMF
#' @rdname trainNMF
#' @title train NMF model
#'
#' @aliases trainNMF
#'
#' @description This is the training function used by SPOTLight. This function
#' takes in single cell expression data, trains the model and learns topic
#' profiles for each cell type
#'
#' @param x single-cell dataset. Can be a numeric matrix, Can be a
#' numeric matrix or \code{SingleCellExperiment}.
#' @param y Null if you want to train the model with all the genes in the SC
#' data or a character vector with the rownames of the mixture dataset to
#' subset the gene set used to the intersection between them.
#' @param slot_sc If the object is of class \code{SingleCellExperiment}
#' indicates matrix to use. By default "counts".
#' @inheritParams SPOTlight
#'
#' @return a list where the first element is a list with the NMF model
#' information and the second is a matrix containing the topic profiles
#' learnt per cell type.
#'
#' @author Marc Elosua Bayes & Helena L Crowell
#'
#' @examples
#' set.seed(321)
#' # mock up some single-cell, mixture & marker data
#' sce <- mockSC(ng = 200, nc = 10, nt = 3)
#' spe <- mockSP(sce)
#' mgs <- getMGS(sce)
#'
#' res <- trainNMF(
#' x = sce,
#' y = rownames(spe),
#' groups = sce$type,
#' mgs = mgs,
#' weight_id = "weight",
#' group_id = "type",
#' gene_id = "gene")
#' # Get NMF model
#' res[["mod"]]
#' # Get topic profiles
#' res[["topic"]]
NULL
#' @rdname trainNMF
# Key here to load t & Matrix so sparse matrices can be transposed
#' @importFrom Matrix Matrix t
#' @export
trainNMF <- function(
x,
y = NULL,
groups = NULL,
mgs,
n_top = NULL,
gene_id = "gene",
group_id = "cluster",
weight_id = "weight",
hvg = NULL,
scale = TRUE,
verbose = TRUE,
L1_nmf = 0,
L2_nmf = 0,
tol = 1e-05,
maxit = 100,
threads = 0,
slot_sc = "counts",
...) {
if (is.null(n_top))
n_top <- max(table(mgs[[group_id]]))
ids <- c(gene_id, group_id, weight_id)
# convert mgs to dataframe if it is not already
if (!is.data.frame(mgs)) {
# check.names=FALSE to ensure the ids specified by the user are unchanged
mgs <- data.frame(mgs, check.names = FALSE)
}
stopifnot(
is.numeric(x) | is(x, "dgCMatrix") |
is(x, "SingleCellExperiment") |
is(x, "DelayedMatrix"),
(is.vector(y) & is.character(y)) | is.null(y),
is.character(ids), length(ids) == 3, ids %in% names(mgs),
is.null(groups) | length(groups) == ncol(x),
is.logical(scale), length(scale) == 1,
is.logical(verbose), length(verbose) == 1,
is.numeric(L1_nmf), length(L1_nmf) == 1,
is.numeric(L2_nmf), length(L2_nmf) == 1,
is.numeric(tol), length(tol) == 1)
# Set groups if x is SCE or SE and groups is NULL
if (is.null(groups))
groups <- .set_groups_if_null(x)
groups <- as.character(groups)
# Check mgs is a dataframe or conver it to a df
if (!is.data.frame(mgs)) {
if (is(mgs, "tibble") || is(mgs, "list")) {
mgs <- as.data.frame(mgs)
} else stop("'mgs' should be a 'data.frame'")
}
# Stop if at least one of the groups doesn't have marker genes
stopifnot(groups %in% mgs[[group_id]])
# Extract expression matrices for x and y
if (!is.matrix(x) & !is(x, "dgCMatrix"))
x <- .extract_counts(x, slot_sc)
# Make sure matrix is sparse
# convert matrix to dgCMatrix,
# if it is already then nothing is done
x <- as(x, "dgCMatrix")
# Set y no rownames X if NULL
if (is.null(y))
y <- rownames(x)
# select genes in mgs or hvg
if (!is.null(hvg)) {
# Select union of genes between markers and HVG
mod_genes <- union(unique(mgs[[gene_id]]), hvg)
} else {
# Select genes from the marker genes only
mod_genes <- unique(mgs[[gene_id]])
}
# Select intersection between interest and present in x (sce) & y (spe)
mod_genes <- intersect(mod_genes, intersect(rownames(x), y))
# drop features that are undetected in single-cell and/or mixture data
x <- .filter(x[mod_genes, ], y)
mgs <- mgs[mgs[[gene_id]] %in% rownames(x), ]
# scale to unit variance (optional)
if (scale) {
if (verbose) message("Scaling count matrix")
x <- .scale_uv(x)
}
# capture start time
t0 <- Sys.time()
# set model rank to number of groups
rank <- length(unique(groups))
# Get seeding matrices
if (verbose) message("Seeding NMF model...")
hw <- .init_nmf(x, groups, mgs, n_top, gene_id, group_id, weight_id)
# w_init <- .init_nmf(x, groups, mgs, n_top, gene_id, group_id, weight_id)
if (verbose) message("Training NMF model...")
# call to C++ routine
mod <- run_nmf(x, t(x), tol, maxit, verbose, L1_nmf, L2_nmf, threads, t(hw$W))
# Change nmfX to topic_X
colnames(mod$w) <- paste0("topic_", seq_len(ncol(mod$w)))
rownames(mod$h) <- paste0("topic_", seq_len(nrow(mod$h)))
rownames(mod$w) <- rownames(x)
colnames(mod$h) <- colnames(x)
# capture stop time
t1 <- Sys.time()
# print runtimes
if (verbose) {
dt <- round(difftime(t1, t0, units = "mins"), 2)
message("Time for training: ", dt, "min")
}
# Extract NMFfit to list for consistency with RcppML
# mod <- .extract_nmf(mod, hw$W)
# get topic profiles per cell type
topic <- .topic_profiles(mod, groups)
return(list("mod" = mod, "topic" = topic))
}
================================================
FILE: R/utils.R
================================================
#' @importFrom sparseMatrixStats rowSds
.scale_uv <- function(x) {
sds <- rowSds(x, na.rm = TRUE)
# TODO find a more efficient way of scaling the matrix
# t1 <- t(scale(t(x), center = FALSE, scale = sds))
# Scale by gene (each row by its sd) for unit variance
t1 <- x / sds
t1
}
#' @importFrom Matrix Matrix
.init_nmf <- function(x,
groups,
mgs,
n_top = NULL,
gene_id = "gene",
group_id = "cluster",
weight_id = "weight") {
# check validity of input arguments
if (is.null(n_top)) {
n_top <- max(table(mgs[[group_id]]))
}
stopifnot(
is.character(gene_id), length(gene_id) == 1,
is.character(group_id), length(group_id) == 1,
is.character(weight_id), length(weight_id) == 1,
c(gene_id, group_id, weight_id) %in% names(mgs),
is.numeric(n_top), length(n_top) == 1, round(n_top) == n_top)
ng <- nrow(x)
nc <- ncol(x)
names(ks) <- ks <- unique(groups)
# subset 'n_top' features
mgs <- split(mgs, mgs[[group_id]])
mgs <- lapply(mgs, function(df) {
o <- order(df[[weight_id]], decreasing = TRUE)
n <- ifelse(nrow(df) < n_top, nrow(df), n_top)
df[o, ][seq_len(n), ]
})
# subset unique features
# mgs <- lapply(ks, function(k) {
# g1 <- mgs[[k]][[gene_id]]
# g2 <- unlist(lapply(mgs[ks != k], `[[`, gene_id))
# mgs[[k]][!g1 %in% g2, , drop = FALSE]
# })
# W is of dimension (#groups)x(#features) with W(i,j)
# equal to weight if j is marker for i, and ~0 otherwise
W <- vapply(ks, function(k) {
w <- numeric(ng) + 1e-12
names(w) <- rownames(x)
ws <- mgs[[k]][[weight_id]]
w[mgs[[k]][[gene_id]]] <- ws
return(w)
}, numeric(ng))
# H is of dimension (#groups)x(#samples) with H(i,j)
# equal to 1 if j is in i, and ~0 otherwise
cs <- split(seq_len(nc), groups)
H <- t(vapply(ks, function(k) {
h <- numeric(nc) + 1e-12
h[cs[[k]]] <- 1
return(h)
}, numeric(nc)))
tp <- paste0("topic_", seq_len(length(ks)))
dimnames(W) <- list(rownames(x), tp)
dimnames(H) <- list(tp, colnames(x))
return(list("W" = W, "H" = H))
}
#' .init_nmf <- function(x,
#' groups,
#' mgs,
#' n_top = NULL,
#' gene_id = "gene",
#' group_id = "cluster",
#' weight_id = "weight") {
#' # check validity of input arguments
#' if (is.null(n_top)) {
#' n_top <- max(table(mgs[[group_id]]))
#' }
#' stopifnot(
#' is.character(gene_id), length(gene_id) == 1,
#' is.character(group_id), length(group_id) == 1,
#' is.character(weight_id), length(weight_id) == 1,
#' c(gene_id, group_id, weight_id) %in% names(mgs),
#' is.numeric(n_top), length(n_top) == 1, round(n_top) == n_top)
#'
#' ng <- nrow(x)
#' nc <- ncol(x)
#' names(ks) <- ks <- unique(groups)
#'
#' # subset 'n_top' features
#' mgs <- split(mgs, mgs[[group_id]])
#' mgs <- lapply(mgs, function(df) {
#' o <- order(df[[weight_id]], decreasing = TRUE)
#' n <- ifelse(nrow(df) < n_top, nrow(df), n_top)
#' df[o, ][seq_len(n), ]
#' })
#'
#' # subset unique features
#' mgs <- lapply(ks, function(k) {
#' g1 <- mgs[[k]][[gene_id]]
#' g2 <- unlist(lapply(mgs[ks != k], `[[`, gene_id))
#' mgs[[k]][!g1 %in% g2, , drop = FALSE]
#' })
#'
#' # W is of dimension (#groups)x(#features) with W(i,j)
#' # equal to weight if j is marker for i, and ~0 otherwise
#' W <- vapply(ks, function(k) {
#' w <- numeric(ng) + 1e-12
#' names(w) <- rownames(x)
#' ws <- mgs[[k]][[weight_id]]
#' w[mgs[[k]][[gene_id]]] <- ws
#' return(w)
#' }, numeric(ng))
#'
#' # there is no need to initialize H
#' tp <- paste0("topic_", seq_len(length(ks)))
#' dimnames(W) <- list(rownames(x), tp)
#' return(W)
#' }
#' Filter features from expression matrix
#'
#' Remove undetected features and optionally keep only shared features
#' between the expression matrix and a reference set of features.
#'
#' @param x Expression matrix to filter
#' @param y Vector of feature names to keep (optional)
#'
#' @return Filtered expression matrix
#'
#' @details This function:
#' \itemize{
#' \item Removes features with zero expression across all samples
#' \item Optionally filters to keep only features present in both datasets
#' \item Ensures a minimum of 10 features remain after filtering
#' }
#'
#' @importFrom Matrix Matrix rowSums
.filter <- function(x, y) {
# remove undetected features
.fil <- function(.) {
i <- rowSums(.) > 0
.[i, , drop = FALSE]
}
x <- .fil(x)
# keep only shared features
if (!is.null(y))
x <- x[intersect(rownames(x), y), ]
if (nrow(x) < 10) {
stop(
"Insufficient number of features shared",
" between single-cell and mixture dataset.")
}
return(x)
}
#' @importFrom sparseMatrixStats colMedians
.topic_profiles <- function(mod, groups) {
# Treat mod differently if it comes from NMF or RcppML
df <- data.frame(t(mod$h))
dfs <- split(df, groups)
res <- vapply(
dfs, function(df)
colMedians(as.matrix(df)),
numeric(ncol(df))
)
rownames(res) <- paste0("topic_", seq_len(nrow(res)))
return(t(res))
}
#' @importFrom sparseMatrixStats rowSums2
.pred_hp <- function(
x, mod, scale = TRUE, verbose = TRUE,
L1_nnls = 0, L2_nnls = 0, threads = 0
) {
W <- mod$w
# remove all genes that are all 0s
g0 <- rowSums2(x) > 0
# Return a warning about genes being removed
if (!all(g0) & verbose)
message("Removing genes in mixture matrix that are all 0s")
x <- x[g0, ]
# Subset to shared genes between SP and SC
if (verbose)
message("Keep intersection of genes between W and mixture matrix")
gi <- intersect(rownames(W), rownames(x))
x <- x[gi, ]
W <- W[gi, ]
# Check there are enough shared features
if (nrow(x) < 10) {
stop(
"Insufficient number of features, <10, shared",
" between trained model and mixture dataset.")
}
if (scale) {
x <- .scale_uv(x)
}
# TODO sometimes this can predict all to 0 if not scaled
# If I do this we get the same since colSums(W) = 1 for all coummns
# Use a very very mild regularization at this step
# TODO revert back to native RCPP code works
y <- predict_nmf(as(x, "dgCMatrix"), t(W), L1_nnls, L2_nnls, threads)
# y <- RcppML::project(
# A = as(x, "dgCMatrix"),
# w = W,
# L1 = L1_nnls,
# nonneg = TRUE)
# TODO set up a test to deal when a column in y is all 0s, meaning all the topics are 0 for that cell type
# Assign names
rownames(y) <- rownames(mod$h)
colnames(y) <- colnames(x)
return(y)
}
# Test if a package is installed
# x is a stringr or vector of strings of packages names
# to test if they are installed
.test_installed <- function(x) {
# Check which packages aren't installed
t <- vapply(x, function(i)
isFALSE(requireNamespace(i, quietly = TRUE)), numeric(1))
x <- x[t == 1]
if (length(x) > 0) {
x <- paste(x, collapse = ", ")
stop("Please install package/s: ", x)
}
}
# Helper function to substitute the S4 method.
# This function takes in an object of class accepted in SPOTlight, it
# extracts the count/expression matrix specified and returns a matrix
.extract_counts <- function(x, slot) {
# Iterate over all the accepted classes and return expression matrix
# Extract count matrix from object
if (is(x, "SpatialExperiment") | is(x, "SingleCellExperiment")) {
.test_installed(c("SummarizedExperiment"))
# Stop if there are no images or the name selected doesn't exist
stopifnot(
# Stop if there are no images
!is.null(SummarizedExperiment::assayNames(x)),
# Stop if the image doesn't exist
slot %in% SummarizedExperiment::assayNames(x),
# Return error if there are no colnames in the object
!is.null(colnames(x))
)
## Extract SCE-SE coordinates
x <- SummarizedExperiment::assay(x, slot)
}
# Process expression matrix
if (is(x, "DelayedMatrix")) {
# Convert to matrix
rn <- rownames(x)
cn <- colnames(x)
x <- Matrix(x, sparse = TRUE, nrow = nrow(x), ncol = ncol(x))
rownames(x) <- rn
colnames(x) <- cn
} else if (is(x, "dgCMatrix") | is.matrix(x)) {
x
} else {
stop("Couldn't extract counts. Please check class(x) is a
SingleCellExpriment, SpatialExperiment, matrix, DelayedMatrix
or dgCMatrix.")
}
return(x)
}
# Take an array representing an image and plot it with ggplot2
#' @import ggplot2
#' @importFrom grid rasterGrob unit
.plot_image <- function(x) {
# Check necessary packages are installed and if not STOP
.test_installed(c("grid", "ggplot2"))
x <- grid::rasterGrob(x,
interpolate = FALSE,
width = grid::unit(1, "npc"),
height = grid::unit(1, "npc"))
ggplot() +
annotation_custom(
grob = x,
xmin = 0,
xmax = ncol(x$raster),
ymin = 0,
ymax = nrow(x$raster)) +
coord_fixed(
xlim = c(0, ncol(x$raster)),
ylim = c(0, nrow(x$raster))) +
theme_void()
# theme_classic()
}
# Extract image and convert it to array from allowed classes
.extract_image <- function(x, slice = NULL) {
# Iterate over all the accepted classes and convert the image to array
if (is.character(x)) {
.test_installed(c("jpeg", "png"))
# Check if the file exists
stopifnot(file.exists(x))
# Check the file is in the right format
typ <- c("jpg", "jpeg", "png")
pat <- paste0(".", typ, "$")
idx <- vapply(pat, grepl, x = x, logical(1), ignore.case = TRUE)
if (!any(idx)) {
stop("'x' should be of file type JPG, JPEG or PNG")
}
# Read file
x <- switch(typ[idx],
png = png::readPNG(x),
jpeg::readJPEG(x))
} else if (is(x, "SpatialExperiment")) {
.test_installed(c("SpatialExperiment"))
# Stop if there are no images or the name selected doesn't exist
stopifnot(
!is.null(SpatialExperiment::getImg(x)),
slice %in% SpatialExperiment::imgData(x)[1, "sample_id"]
)
# If image is null use the first slice
if (is.null(slice))
slice <- SpatialExperiment::imgData(x)[1, "sample_id"]
# Convert to raster
x <- SpatialExperiment::imgRaster(x, sample_id = slice)
x <- as.matrix(x)
} else {
stop("Couldn't extract image, See ?plotImage for valid image inputs.")
}
return(x)
}
# When assigning cells to groups in trainNMF and SPOTlight if groups is set to
# NULL use the cell identities/labels. If it is not a Seurat or SCE return error
#' @importFrom SingleCellExperiment colLabels
.set_groups_if_null <- function(x) {
## SCE ##
if (is(x, "SingleCellExperiment")) {
# Extract idents
idents <- SingleCellExperiment::colLabels(x)
if (is.null(idents)) {
stop("SingleCellExperiment::colLabels(x) is NULL")
} else {
warning("Grouping cells into celltypes
by SingleCellExperiment::colLabels(x)")
groups <- as.character(idents)
}
## other ##
} else {
stop("Parameter groups needs to be defined.")
}
groups
}
# Helper function to extract elements of interest from objects NMFfit
# (NMF package) and nmf (RcppML) and returns a list with relevant information
# consistent between both of them
# .extract_nmf <- function(mod, smtx) {
# if (is(mod, "NMFfit")) {
# mod <- list(
# "w" = NMF::basis(mod),
# "d" = NULL,
# "h" = NMF::coef(mod),
# "misc" = list(
# "tol" = NULL,
# "iter" = mod@extra$iteration,
# "runtime" = mod@runtime,
# "mse" = NULL,
# "w_init" = smtx)
# )
# } else if (is.list(mod)) {
# mod <- list(
# "w" = mod$w,
# "d" = mod$d,
# "h" = mod$h,
# "misc" = list(
# "tol" = NULL,
# "iter" = NULL,
# "runtime" = NULL,
# "mse" = NULL,
# "w_init" = NULL)
# )
# } else {
# stop("mod is neither an 'NMFfit' or 'nmf' object ")
# }
#
# return(mod)
# }
================================================
FILE: README.md
================================================
# Welcome to `SPOTlight` <img src="inst/extdata/SPOTlight.png" width="200" align="right"/>
### We are currently on the process of submitting SPOTlight to bioconductor and there have been some styling changes on this branch compared to previous releases. If you want to use the version we are currently submitting feel free to look at the updated vignette [here](https://github.com/MarcElosua/SPOTlight/blob/main/vignettes/SPOTlight_kidney.Rmd). If you want to keep using the previous versions, you can still find it in the [spotlight-0.1.7 branch](https://github.com/MarcElosua/SPOTlight/tree/spotlight-0.1.7) and follow the previous [vignette](https://marcelosua.github.io/SPOTlight/).
`SPOTlight` provides a tool that enables the deconvolution of mixtures of cells from a single-cell reference. Originally developed for 10X's Visium - spatial transcriptomics- technology, it can be used for all technologies that output mixtures of cells. It is compatible with Bioconductor's `SingleCellExperiment` and `SpatialExperiment` classes. Furthermore, the package also provides visualization tools to assess the results of the deconvolution. Briefly, `SPOTlight` is based on finding topic profile signatures, by means of an NMFreg model, for each cell type and then optimizing the cell types proportions to fit the mixture we want to deconvolute.
<img src="vignettes/schematic.png" width="600"/>
## Installation
``` r
install.packages("BiocManager")
BiocManager::install("SPOTlight")
# Or the devel version
BiocManager::install("SPOTlight", version = "devel")
```
Alternatively, you can install it from GitHub using the [devtools](https://github.com/hadley/devtools) package.
``` r
install.packages("devtools")
library(devtools)
install_github("https://github.com/MarcElosua/SPOTlight")
```
### References
- Elosua-Bayes M, Nieto P, Mereu E, Gut I, Heyn H (2021): *SPOTlight: seeded NMF regression to deconvolute spatial transcriptomics spots with single-cell transcriptomes*. **Nucleic Acids Res** 49(9):e50. <doi:10.1093/nar/gkab043>.
------------------------------------------------------------------------
### Contributors
SPOTlight was originally developed by [Marc Elosua Bayes](https://github.com/MarcElosua/) and has received substantial additional contributions from [Helena L. Crowell](https://github.com/HelenaLC) and [Zach DeBruine](https://github.com/zdebruine).
### Issues - Ideas?
`SPOTlight` is still under active development. We greatly welcome (and highly encourage!) all feedback, bug reports and suggestions for improvement [here](https://github.com/MarcElosua/SPOTlight/issues). **Please make sure to raise issues with a [reproducible example](https://www.tidyverse.org/help/) and the output of your `sessionInfo()`.**
================================================
FILE: man/SPOTlight.Rd
================================================
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/SPOTlight.R
\name{SPOTlight}
\alias{SPOTlight}
\title{Deconvolution of mixture using single-cell data}
\usage{
SPOTlight(
x,
y,
groups = NULL,
mgs,
n_top = NULL,
gene_id = "gene",
group_id = "cluster",
weight_id = "weight",
hvg = NULL,
scale = TRUE,
min_prop = 0.01,
verbose = TRUE,
slot_sc = "counts",
slot_sp = "counts",
L1_nmf = 0,
L2_nmf = 0,
maxit = 100,
threads = 0,
tol = 1e-05,
L1_nnls_topics = 0,
L2_nnls_topics = 0,
L1_nnls_prop = 0,
L2_nnls_prop = 0,
...
)
}
\arguments{
\item{x, y}{single-cell and mixture dataset, respectively. Can be a
numeric matrix or \code{SingleCellExperiment}..}
\item{groups}{character vector of group labels for cells in \code{x}.
When \code{x} is a \code{SingleCellExperiment}.,
defaults to \code{colLabels(x)} and \code{Idents(x)}, respectively.
Make sure groups is not a Factor.}
\item{mgs}{\code{data.frame} or \code{DataFrame} of marker genes.
Must contain columns holding gene identifiers, group labels and
the weight (e.g., logFC, -log(p-value) a feature has in a given group.}
\item{n_top}{integer scalar specifying the number of markers to select per
group. By default NULL uses all the marker genes to initialize the model.}
\item{gene_id, group_id, weight_id}{character specifying the column
in \code{mgs} containing gene identifiers, group labels and weights,
respectively.}
\item{hvg}{character vector containing hvg to include in the model.
By default NULL.}
\item{scale}{logical specifying whether to scale single-cell counts to unit
variance. This gives the user the option to normalize the data beforehand
as you see fit (CPM, FPKM, ...) when passing a matrix or specifying the
slot from where to extract the count data.}
\item{min_prop}{scalar in [0,1] setting the minimum contribution
expected from a cell type in \code{x} to observations in \code{y}.
By default 0.}
\item{verbose}{logical. Should information on progress be reported?}
\item{slot_sc, slot_sp}{If the object is of class \code{SingleCellExperiment}
indicates matrix to use. By default "counts".}
\item{L1_nmf}{LASSO penalty in the range (0, 1] for NMF,
larger values increase sparsity of each factor}
\item{L2_nmf}{RUDGE penalty >0 for NMF,
larger values increase angle between factors and thus sparsity.}
\item{maxit}{maximum number of NMF iterations for fitting}
\item{threads}{number of threads to use, default 0 (all threads)}
\item{tol}{tolerance of the NMF model at convergence, the Pearson correlation
distance between models across consecutive iterations (1e-5 is publication quality)}
\item{L1_nnls_topics, L1_nnls_prop}{LASSO penalty in the range (0, 1] for NNLS
when computing cell type topic profiles and cell type proportions
respectively. Larger values remove "noisy" contributions more aggressively.}
\item{L2_nnls_topics, L2_nnls_prop}{RIDGE penalty >0 for NNLS when computing
cell type topic profiles and cell type proportions respectively.
Larger values remove "noisy" contributions more aggressively.}
\item{...}{additional parameters.}
}
\value{
a numeric matrix with rows corresponding to samples
and columns to groups
}
\description{
This is the backbone function which takes in single cell
expression data to deconvolute spatial transcriptomics spots.
}
\details{
SPOTlight uses a Non-Negative Matrix Factorization approach to learn
which genes are important for each cell type. In order to drive the
factorization and give more importance to cell type marker genes we
previously compute them and use them to initialize the basis matrix. This
initialized matrices will then be used to carry out the factorization with
the single cell expression data. Once the model has learn the topic
profiles for each cell type we use non-negative least squares (NNLS) to
obtain the topic contributions to each spot. Lastly, NNLS is again used to
obtain the proportion of each cell type for each spot by finding the
fitting the single-cell topic profiles to the spots topic contributions.
}
\examples{
library(scater)
library(scran)
# Use Mock data
# Refer to the vignette for a full workflow
sce <- mockSC(ng = 200, nc = 10, nt = 3)
spe <- mockSP(sce)
mgs <- getMGS(sce)
res <- SPOTlight(
x = counts(sce),
y = counts(spe),
groups = as.character(sce$type),
mgs = mgs,
hvg = NULL,
weight_id = "weight",
group_id = "type",
gene_id = "gene")
}
\author{
Marc Elosua Bayes, Zach DeBruine, and Helena L Crowell
}
================================================
FILE: man/data.Rd
================================================
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\name{data}
\alias{data}
\alias{mockSC}
\alias{mockSP}
\alias{getMGS}
\title{Synthetic single-cell, mixture and marker data}
\usage{
mockSC(ng = 200, nc = 50, nt = 3)
mockSP(x, ns = 100)
getMGS(x, n_top = 10)
}
\arguments{
\item{ng, nc, nt, ns}{integer scalar specifying the number
of genes, cells, types (groups) and spots to simulate.}
\item{x}{Single cell experiment object}
\item{n_top}{integer specifying the number of
marker genes to extract for each cluster.}
}
\value{
\itemize{
\item{\code{mockSC} returns a \code{SingleCellExperiment}
with rows = genes, columns = single cells, and cell metadata
(\code{colData}) column \code{type} containing group identifiers.}
\item{\code{mockSP} returns a \code{SingleCellExperiment}
with rows = genes, columns = single cells, and cell metadata
(\code{colData}) column \code{type} containing group identifiers.}
\item{\code{getMGS} returns a \code{data.frame} with \code{nt*n_top}
rows and 3 columns: gene and type (group) identifier, as well as the
gene's weight = the proportion of counts accounted for by that type.}
}
}
\description{
\code{mockSC/mockSP()} are designed to generate synthetic single-cell and
spatial mixture data. These data are not meant to represent biologically
meaningful use-cases, but are solely intended for use in examples, for
unit-testing, and to demonstrate \code{SPOTlight}'s general functionality.
Finally, \code{.get_mgs()} implements a statistically naive way to select
markers from single-cell data; again, please don't use it in real life.
}
\examples{
sce <- mockSC()
spe <- mockSP(sce)
mgs <- getMGS(sce)
}
================================================
FILE: man/dot-filter.Rd
================================================
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.R
\name{.filter}
\alias{.filter}
\title{.init_nmf <- function(x,
groups,
mgs,
n_top = NULL,
gene_id = "gene",
group_id = "cluster",
weight_id = "weight") {
# check validity of input arguments
if (is.null(n_top)) {
n_top <- max(table(mgs[[group_id]]))
}
stopifnot(
is.character(gene_id), length(gene_id) == 1,
is.character(group_id), length(group_id) == 1,
is.character(weight_id), length(weight_id) == 1,
c(gene_id, group_id, weight_id) %in% names(mgs),
is.numeric(n_top), length(n_top) == 1, round(n_top) == n_top)
ng <- nrow(x)
nc <- ncol(x)
names(ks) <- ks <- unique(groups)
# subset 'n_top' features
mgs <- split(mgs, mgs[[group_id]])
mgs <- lapply(mgs, function(df) {
o <- order(df[[weight_id]], decreasing = TRUE)
n <- ifelse(nrow(df) < n_top, nrow(df), n_top)
df[o, ][seq_len(n), ]
})
# subset unique features
mgs <- lapply(ks, function(k) {
g1 <- mgs[[k]][[gene_id]]
g2 <- unlist(lapply(mgs[ks != k], `[[`, gene_id))
mgs[[k]][!g1 %in% g2, , drop = FALSE]
})
# W is of dimension (#groups)x(#features) with W(i,j)
# equal to weight if j is marker for i, and ~0 otherwise
W <- vapply(ks, function(k) {
w <- numeric(ng) + 1e-12
names(w) <- rownames(x)
ws <- mgs[[k]][[weight_id]]
w[mgs[[k]][[gene_id]]] <- ws
return(w)
}, numeric(ng))
# there is no need to initialize H
tp <- paste0("topic_", seq_len(length(ks)))
dimnames(W) <- list(rownames(x), tp)
return(W)
}
Filter features from expression matrix}
\usage{
.filter(x, y)
}
\arguments{
\item{x}{Expression matrix to filter}
\item{y}{Vector of feature names to keep (optional)}
}
\value{
Filtered expression matrix
}
\description{
Remove undetected features and optionally keep only shared features
between the expression matrix and a reference set of features.
}
\details{
This function:
\itemize{
\item Removes features with zero expression across all samples
\item Optionally filters to keep only features present in both datasets
\item Ensures a minimum of 10 features remain after filtering
}
}
================================================
FILE: man/plotCorrelationMatrix.Rd
================================================
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plotCorrelationMatrix.R
\name{plotCorrelationMatrix}
\alias{plotCorrelationMatrix}
\title{Plot Correlation Matrix}
\usage{
plotCorrelationMatrix(
x,
cor.method = c("pearson", "kendall", "spearman"),
insig = c("blank", "pch"),
colors = c("#6D9EC1", "white", "#E46726"),
hc.order = TRUE,
p.mat = TRUE,
...
)
}
\arguments{
\item{x}{numeric matrix with rows = samples and columns = cell types
Must have at least two rows and two columns.}
\item{cor.method}{Method to use for correlation:
c("pearson", "kendall", "spearman"). By default pearson.}
\item{insig}{character, specialized insignificant correlation coefficients,
"pch", "blank" (default). If "blank", wipe away the corresponding glyphs;
if "pch", add characters (see pch for details) on corresponding glyphs.}
\item{colors}{character vector with three colors indicating the lower, mid,
and high color. By default c("#6D9EC1", "white", "#E46726").}
\item{hc.order}{logical value. If TRUE, correlation matrix will be
hc.ordered using hclust function.}
\item{p.mat}{logical value. If TRUE (default), correlation significance
will be used. If FALSE arguments sig.level, insig, pch, pch.col,
pch.cex are invalid.}
\item{...}{additional graphical parameters passed to \code{ggcorrplot}.}
}
\value{
\code{ggplot} object
}
\description{
This function takes in a matrix with the predicted proportions
for each spot and returns a correlation matrix between cell types.
}
\examples{
set.seed(321)
x <- replicate(m <- 25, runif(10, 0, 1))
rownames(x) <- paste0("spot", seq_len(nrow(x)))
colnames(x) <- paste0("type", seq_len(ncol(x)))
# The most basic example
plotCorrelationMatrix(x = x)
# Showing the non-significant correlatinos
plotCorrelationMatrix(x = x, insig = "pch")
# A more elaborated
plotCorrelationMatrix(
x = x,
hc.order = FALSE,
type = "lower",
outline.col = "lightgrey",
method = "circle",
colors = c("#64ccc9", "#b860bd", "#e3345d"))
}
\author{
Marc Elosua Bayes & Helena L Crowell
}
================================================
FILE: man/plotImage.Rd
================================================
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plotImage.R
\name{plotImage}
\alias{plotImage}
\title{Plot JP(E)G/PNG/Raster/RGB images}
\arguments{
\item{x}{A variety of objects can be passed: character string corresponding
to an image file path, valid file types are JPG, JPEG and PNG. It can also
take as input objects of class raster and RGB arrays. It can also take
a SpatialExperiment from which the image will be extracted.}
\item{slice}{Character string indicating which image slice to use when
SpatialExperiment objects are passed. By default uses the first
slice available.}
}
\value{
\code{ggplot} object
}
\description{
This function takes in an image-related object - path to
JP(E)G/PNG file, raster object, RGBarray. It returns a ggplot object with
the selected image.
}
\examples{
# Filename
path <- file.path(
system.file(package = "SPOTlight"),
"extdata/SPOTlight.png")
plotImage(x = path)
# array
png_img <- png::readPNG(path)
plotImage(png_img)
# SpatialExperiment
}
\author{
Marc Elosua Bayes & Helena L Crowell
}
================================================
FILE: man/plotInteractions.Rd
================================================
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plotInteractions.R
\name{plotInteractions}
\alias{plotInteractions}
\alias{plotHeatmap}
\alias{plotNetwork}
\title{Plot group interactions}
\usage{
plotInteractions(
x,
which = c("heatmap", "network"),
metric = c("prop", "jaccard"),
min_prop = 0,
...
)
}
\arguments{
\item{x}{numeric matrix with rows = samples and columns = groups.
Must have at least one row and column, and at least two columns.}
\item{which}{character string specifying the type of
visualization: one of "heatmap" or "network".}
\item{metric}{character string specifying which metric to show:
one of "prop" or "jaccard".}
\item{min_prop}{scalar specifying the value above which
a group is considered to be contributing to a given sample.
An interaction between groups i and j is counted for sample s
only when both x[s, i] and x[s, j] fall above \code{min_prop}.}
\item{...}{additional graphical parameters passed
to \code{plot.igraph} when \code{which = "network"}
(see \code{?igraph.plotting}).}
}
\value{
base R plot
}
\description{
This function takes in a matrix with the predicted proportions
for each spot and returns a heatmap \code{which = plotHeatmap} or a network
graph \code{which = plotNetwork} to show which cells are interacting
spatially.
}
\examples{
library(ggplot2)
mat <- replicate(8, rnorm(100, runif(1, -1, 1)))
# Basic example
plotInteractions(mat)
### heatmap ###
# This returns a ggplot object that can be modified as such
plotInteractions(mat, which = "heatmap") +
scale_fill_gradient(low = "#f2e552", high = "#850000") +
labs(title = "Interaction heatmap", fill = "proportion")
### Network ###
# specify node names
nms <- letters[seq_len(ncol(mat))]
plotInteractions(mat, which = "network", vertex.label = nms)
# or set column names instead
colnames(mat) <- nms
plotInteractions(mat, which = "network")
# pass additional graphical parameters for aesthetics
plotInteractions(mat,
which = "network",
edge.color = "cyan",
vertex.color = "pink",
vertex.label.font = 2,
vertex.label.color = "maroon")
}
\author{
Marc Elosua Bayes & Helena L Crowell
}
================================================
FILE: man/plotSpatialScatterpie.Rd
================================================
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plotSpatialScatterpie.R
\name{plotSpatialScatterpie}
\alias{plotSpatialScatterpie}
\title{Spatial scatterpie}
\usage{
plotSpatialScatterpie(
x,
y,
cell_types = colnames(y),
img = FALSE,
slice = NULL,
scatterpie_alpha = 1,
pie_scale = 0.4,
degrees = NULL,
axis = NULL,
...
)
}
\arguments{
\item{x}{Object containing the spots coordinates, it can be an object of class
SpatialExperiment, dataframe or matrix. For the latter two
rownames should have the spot barcodes to match x. If a matrix it has to
of dimensions nrow(y) x 2 where the columns are the x and y coordinates
in that order.}
\item{y}{Matrix or dataframe containing the deconvoluted spots. rownames
need to be the spot barcodes to match to x.}
\item{cell_types}{Vector of cell type names to plot. By default uses the
column names of y.}
\item{img}{Logical TRUE or FALSE indicating whether to plot the image or not.
Objects of classes accepted by \code{plotImage} can also be passed and
that image will be used. By default FALSE.}
\item{slice}{Character string indicating which slice to plot if img is TRUE.
By default uses the first image.}
\item{scatterpie_alpha}{Numeric scalar to set the alpha of the pie charts.
By default 1.}
\item{pie_scale}{Numeric scalar to set the size of the pie charts.
By default 0.4.}
\item{degrees}{From SpatialExperiment rotateImg. For clockwise (degrees > 0)
and counter-clockwise (degrees < 0) rotation. By default NULL.}
\item{axis}{From SpatialExperiment mirrorImg. When a SpatialExperiment object
is passed as the image return the mirror image. For horizontal (axis = "h")
and vertical (axis = "v") mirroring. By default NULL.}
\item{...}{additional parameters to geom_scatterpie}
}
\value{
\code{ggplot} object
}
\description{
This function takes in the coordinates of the spots and the
proportions of the cell types within each spot. It returns a plot where
each spot is a piechart showing proportions of the cell type composition.
}
\examples{
set.seed(321)
# Coordinates
x <- replicate(2, rnorm(100))
rownames(x) <- paste0("spot", seq_len(nrow(x)))
colnames(x) <- c("imagecol", "imagerow")
# Proportions
y <- replicate(m <- 5, runif(nrow(x), 0, 1))
y <- prop.table(y, 1)
rownames(y) <- paste0("spot", seq_len(nrow(y)))
colnames(y) <- paste0("type", seq_len(ncol(y)))
(plt <- plotSpatialScatterpie(x = x, y = y))
}
\author{
Marc Elosua Bayes & Helena L Crowell
}
================================================
FILE: man/plotTopicProfiles.Rd
================================================
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plotTopicProfiles.R
\name{plotTopicProfiles}
\alias{plotTopicProfiles}
\title{Plot NMF topic profiles}
\usage{
plotTopicProfiles(x, y, facet = FALSE, min_prop = 0.01, ncol = NULL)
}
\arguments{
\item{x}{\code{list} object obtained from \code{SPOTlight}.}
\item{y}{vector of group labels. Should be of length
\code{ncol(res_lvl1$NMF$h)}.}
\item{facet}{logical indicating whether to stratify by group.
If \code{FALSE} (default), weights will be the median across cells
for each group (point = topic weight for a given cell type).
If \code{TRUE}, cell-specific weights will be shown
(point = topic weight of a given cell).}
\item{min_prop}{scalar in [0,1]. When \code{facet = TRUE},
only cells with a weight > \code{min_prop} will be included.}
\item{ncol}{integer scalar specifying the number of facet columns.}
}
\value{
\code{ggplot} object
}
\description{
This function takes in the fitted NMF model and returns the
topic profiles learned for each cell \code{facet = FALSE} or cell type
\code{facet = TRUE}. Ideal training will return all the cell from the same
cell type to share a unique topic profile.
}
\examples{
library(ggplot2)
x <- mockSC()
y <- mockSP(x)
z <- getMGS(x)
res <- SPOTlight(x, y,
groups = x$type,
mgs = z,
group_id = "type",
verbose = FALSE)
plotTopicProfiles(res[[3]], x$type, facet = TRUE)
plotTopicProfiles(res[[3]], x$type, facet = FALSE)
}
\author{
Marc Elosua Bayes & Helena L Crowell
}
================================================
FILE: man/runDeconvolution.Rd
================================================
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/runDeconvolution.R
\name{runDeconvolution}
\alias{runDeconvolution}
\title{Run Deconvolution using NNLS model}
\usage{
runDeconvolution(
x,
mod,
ref,
scale = TRUE,
min_prop = 0.01,
verbose = TRUE,
slot = "counts",
L1_nnls_topics = 0,
L2_nnls_topics = 0,
L1_nnls_prop = 0,
L2_nnls_prop = 0,
threads = 0,
...
)
}
\arguments{
\item{x}{mixture dataset. Can be a numeric matrix,
\code{SingleCellExperiment} or \code{SpatialExperiment}}
\item{mod}{object as obtained from trainNMF.}
\item{ref}{object of class matrix containing the topic profiles for each cell
type as obtained from trainNMF.}
\item{scale}{logical specifying whether to scale single-cell counts to unit
variance. This gives the user the option to normalize the data beforehand
as you see fit (CPM, FPKM, ...) when passing a matrix or specifying the
slot from where to extract the count data.}
\item{min_prop}{scalar in [0,1] setting the minimum contribution
expected from a cell type in \code{x} to observations in \code{y}.
By default 0.}
\item{verbose}{logical. Should information on progress be reported?}
\item{slot}{If the object is of class \code{SpatialExperiment} indicates
matrix to use. By default "counts".}
\item{L1_nnls_topics, L1_nnls_prop}{LASSO penalty in the range (0, 1] for NNLS
when computing cell type topic profiles and cell type proportions
respectively. Larger values remove "noisy" contributions more aggressively.}
\item{L2_nnls_topics, L2_nnls_prop}{RIDGE penalty >0 for NNLS when computing
cell type topic profiles and cell type proportions respectively.
Larger values remove "noisy" contributions more aggressively.}
\item{threads}{number of threads to use, default 0 (all threads)}
\item{...}{additional parameters.}
}
\value{
base a list where the first element is a list giving the NMF model and
the second is a matrix containing the topic profiles learnt.
}
\description{
This function takes in the mixture data, the trained model & the
topic profiles and returns the proportion of each cell type within each
mixture
}
\examples{
set.seed(321)
# mock up some single-cell, mixture & marker data
sce <- mockSC(ng = 200, nc = 10, nt = 3)
spe <- mockSP(sce)
mgs <- getMGS(sce)
res <- trainNMF(
x = sce,
y = rownames(spe),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene")
# Run deconvolution
decon <- runDeconvolution(
x = spe,
mod = res[["mod"]],
ref = res[["topic"]])
}
\author{
Marc Elosua Bayes, Zach DeBruine, and Helena L Crowell
}
================================================
FILE: man/trainNMF.Rd
================================================
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/trainNMF.R
\name{trainNMF}
\alias{trainNMF}
\title{train NMF model}
\usage{
trainNMF(
x,
y = NULL,
groups = NULL,
mgs,
n_top = NULL,
gene_id = "gene",
group_id = "cluster",
weight_id = "weight",
hvg = NULL,
scale = TRUE,
verbose = TRUE,
L1_nmf = 0,
L2_nmf = 0,
tol = 1e-05,
maxit = 100,
threads = 0,
slot_sc = "counts",
...
)
}
\arguments{
\item{x}{single-cell dataset. Can be a numeric matrix, Can be a
numeric matrix or \code{SingleCellExperiment}.}
\item{y}{Null if you want to train the model with all the genes in the SC
data or a character vector with the rownames of the mixture dataset to
subset the gene set used to the intersection between them.}
\item{groups}{character vector of group labels for cells in \code{x}.
When \code{x} is a \code{SingleCellExperiment}.,
defaults to \code{colLabels(x)} and \code{Idents(x)}, respectively.
Make sure groups is not a Factor.}
\item{mgs}{\code{data.frame} or \code{DataFrame} of marker genes.
Must contain columns holding gene identifiers, group labels and
the weight (e.g., logFC, -log(p-value) a feature has in a given group.}
\item{n_top}{integer scalar specifying the number of markers to select per
group. By default NULL uses all the marker genes to initialize the model.}
\item{gene_id, group_id, weight_id}{character specifying the column
in \code{mgs} containing gene identifiers, group labels and weights,
respectively.}
\item{hvg}{character vector containing hvg to include in the model.
By default NULL.}
\item{scale}{logical specifying whether to scale single-cell counts to unit
variance. This gives the user the option to normalize the data beforehand
as you see fit (CPM, FPKM, ...) when passing a matrix or specifying the
slot from where to extract the count data.}
\item{verbose}{logical. Should information on progress be reported?}
\item{L1_nmf}{LASSO penalty in the range (0, 1] for NMF,
larger values increase sparsity of each factor}
\item{L2_nmf}{RUDGE penalty >0 for NMF,
larger values increase angle between factors and thus sparsity.}
\item{tol}{tolerance of the NMF model at convergence, the Pearson correlation
distance between models across consecutive iterations (1e-5 is publication quality)}
\item{maxit}{maximum number of NMF iterations for fitting}
\item{threads}{number of threads to use, default 0 (all threads)}
\item{slot_sc}{If the object is of class \code{SingleCellExperiment}
indicates matrix to use. By default "counts".}
\item{...}{additional parameters.}
}
\value{
a list where the first element is a list with the NMF model
information and the second is a matrix containing the topic profiles
learnt per cell type.
}
\description{
This is the training function used by SPOTLight. This function
takes in single cell expression data, trains the model and learns topic
profiles for each cell type
}
\examples{
set.seed(321)
# mock up some single-cell, mixture & marker data
sce <- mockSC(ng = 200, nc = 10, nt = 3)
spe <- mockSP(sce)
mgs <- getMGS(sce)
res <- trainNMF(
x = sce,
y = rownames(spe),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene")
# Get NMF model
res[["mod"]]
# Get topic profiles
res[["topic"]]
}
\author{
Marc Elosua Bayes & Helena L Crowell
}
================================================
FILE: src/Makevars
================================================
# Standard portable configuration - let R handle LAPACK/BLAS
PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)
# PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) $(LAPACK_LIBS) $(BLAS_LIBS)
PKG_CXXFLAGS = $(SHLIB_OPENMP_CXXFLAGS) -DEIGEN_INITIALIZE_MATRICES_BY_ZERO -DEIGEN_NO_DEBUG
CXX_STD = CXX11
================================================
FILE: src/Makevars.win
================================================
PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)
PKG_CXXFLAGS = $(SHLIB_OPENMP_CXXFLAGS) -DEIGEN_INITIALIZE_MATRICES_BY_ZERO -DEIGEN_NO_DEBUG
CXX_STD = CXX11
================================================
FILE: src/RcppExports.cpp
================================================
// Generated by using Rcpp::compileAttributes() -> do not edit by hand
// Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
#include <RcppEigen.h>
#include <Rcpp.h>
using namespace Rcpp;
#ifdef RCPP_USE_GLOBAL_ROSTREAM
Rcpp::Rostream<true>& Rcpp::Rcout = Rcpp::Rcpp_cout_get();
Rcpp::Rostream<false>& Rcpp::Rcerr = Rcpp::Rcpp_cerr_get();
#endif
// predict_nmf
Eigen::MatrixXd predict_nmf(Rcpp::S4& A_, Eigen::MatrixXd& w, const double L1, const double L2, const int threads);
RcppExport SEXP _SPOTlight_predict_nmf(SEXP A_SEXP, SEXP wSEXP, SEXP L1SEXP, SEXP L2SEXP, SEXP threadsSEXP) {
BEGIN_RCPP
Rcpp::RObject rcpp_result_gen;
Rcpp::traits::input_parameter< Rcpp::S4& >::type A_(A_SEXP);
Rcpp::traits::input_parameter< Eigen::MatrixXd& >::type w(wSEXP);
Rcpp::traits::input_parameter< const double >::type L1(L1SEXP);
Rcpp::traits::input_parameter< const double >::type L2(L2SEXP);
Rcpp::traits::input_parameter< const int >::type threads(threadsSEXP);
rcpp_result_gen = Rcpp::wrap(predict_nmf(A_, w, L1, L2, threads));
return rcpp_result_gen;
END_RCPP
}
// run_nmf
Rcpp::List run_nmf(const Rcpp::S4& A_, const Rcpp::S4& At_, const double tol, const uint16_t maxit, const bool verbose, const double L1, const double L2, const uint16_t threads, Eigen::MatrixXd w);
RcppExport SEXP _SPOTlight_run_nmf(SEXP A_SEXP, SEXP At_SEXP, SEXP tolSEXP, SEXP maxitSEXP, SEXP verboseSEXP, SEXP L1SEXP, SEXP L2SEXP, SEXP threadsSEXP, SEXP wSEXP) {
BEGIN_RCPP
Rcpp::RObject rcpp_result_gen;
Rcpp::traits::input_parameter< const Rcpp::S4& >::type A_(A_SEXP);
Rcpp::traits::input_parameter< const Rcpp::S4& >::type At_(At_SEXP);
Rcpp::traits::input_parameter< const double >::type tol(tolSEXP);
Rcpp::traits::input_parameter< const uint16_t >::type maxit(maxitSEXP);
Rcpp::traits::input_parameter< const bool >::type verbose(verboseSEXP);
Rcpp::traits::input_parameter< const double >::type L1(L1SEXP);
Rcpp::traits::input_parameter< const double >::type L2(L2SEXP);
Rcpp::traits::input_parameter< const uint16_t >::type threads(threadsSEXP);
Rcpp::traits::input_parameter< Eigen::MatrixXd >::type w(wSEXP);
rcpp_result_gen = Rcpp::wrap(run_nmf(A_, At_, tol, maxit, verbose, L1, L2, threads, w));
return rcpp_result_gen;
END_RCPP
}
static const R_CallMethodDef CallEntries[] = {
{"_SPOTlight_predict_nmf", (DL_FUNC) &_SPOTlight_predict_nmf, 5},
{"_SPOTlight_run_nmf", (DL_FUNC) &_SPOTlight_run_nmf, 9},
{NULL, NULL, 0}
};
RcppExport void R_init_SPOTlight(DllInfo *dll) {
R_registerRoutines(dll, NULL, CallEntries, NULL, NULL);
R_useDynamicSymbols(dll, FALSE);
}
================================================
FILE: src/nmf.cpp
================================================
// This C++ file contains a very fast NMF and NNLS implementation
//
// Author: Zach DeBruine (zacharydebruine@gmail.com)
// Source code largely derived from RcppML (github.com/zdebruine/RcppML)
//
// Subject to terms of GPL >=2 license
#ifndef EIGEN_NO_DEBUG
#define EIGEN_NO_DEBUG
#endif
#ifndef EIGEN_INITIALIZE_MATRICES_BY_ZERO
#define EIGEN_INITIALIZE_MATRICES_BY_ZERO
#endif
//[[Rcpp::depends(RcppEigen)]]
#include <RcppEigen.h>
//[[Rcpp::plugins(openmp)]]
#ifdef _OPENMP
#include <omp.h>
#endif
// SPARSE MATRIX CLASS
class spmat {
public:
// public member objects
Rcpp::NumericVector x;
Rcpp::IntegerVector i, p, Dim;
// constructors
spmat(Rcpp::NumericVector x, Rcpp::IntegerVector i, Rcpp::IntegerVector p, Rcpp::IntegerVector Dim) : x(x), i(i), p(p), Dim(Dim) {}
spmat(const Rcpp::S4& s) {
if (!s.hasSlot("x") || !s.hasSlot("i") || !s.hasSlot("p") || !s.hasSlot("Dim"))
Rcpp::stop("provided object could not be converted to a sparse matrix in C++. Sparse matrices must generally be a Matrix::dgCMatrix");
x = s.slot("x");
i = s.slot("i");
p = s.slot("p");
Dim = s.slot("Dim");
}
spmat() {}
size_t rows() { return Dim[0]; }
size_t cols() { return Dim[1]; }
// const column iterator
class InnerIterator {
public:
InnerIterator(spmat& ptr, int col) : ptr(ptr), col_(col), index(ptr.p[col]), max_index(ptr.p[col + 1]) {}
operator bool() const { return (index < max_index); }
InnerIterator& operator++() {
++index;
return *this;
}
double& value() { return ptr.x[index]; }
int row() const { return ptr.i[index]; }
private:
spmat& ptr;
int col_, index, max_index;
};
};
// NMF HELPER FUNCTIONS
// Pearson correlation between two matrices
inline double cor(Eigen::MatrixXd& x, Eigen::MatrixXd& y) {
double x_i, y_i, sum_x = 0, sum_y = 0, sum_xy = 0, sum_x2 = 0, sum_y2 = 0;
const size_t n = x.size();
for (size_t i = 0; i < n; ++i) {
x_i = (*(x.data() + i));
y_i = (*(y.data() + i));
sum_x += x_i;
sum_y += y_i;
sum_xy += x_i * y_i;
sum_x2 += x_i * x_i;
sum_y2 += y_i * y_i;
}
return std::abs(1 - (n * sum_xy - sum_x * sum_y) / std::sqrt((n * sum_x2 - sum_x * sum_x) * (n * sum_y2 - sum_y * sum_y)));
}
// fast symmetric matrix multiplication, A * A.transpose() - double
Eigen::MatrixXd AAt(const Eigen::MatrixXd& A) {
Eigen::MatrixXd AAt = Eigen::MatrixXd::Zero(A.rows(), A.rows());
AAt.selfadjointView<Eigen::Lower>().rankUpdate(A);
AAt.triangularView<Eigen::Upper>() = AAt.transpose();
AAt.diagonal().array() += 1e-15;
return AAt;
}
// scale rows in w (or h) to sum to 1 and put previous rowsums in d
void scale(Eigen::MatrixXd& w, Eigen::VectorXd& d) {
d = w.rowwise().sum();
d.array() += 1e-15;
for (size_t i = 0; i < w.rows(); ++i)
for (size_t j = 0; j < w.cols(); ++j)
w(i, j) /= d(i);
};
// calculate sort index of vector "d" in decreasing order
inline std::vector<int> sort_index(const Eigen::VectorXd& d) {
std::vector<int> idx(d.size());
std::iota(idx.begin(), idx.end(), 0);
sort(idx.begin(), idx.end(), [&d](size_t i1, size_t i2) { return d[i1] > d[i2]; });
return idx;
}
// reorder rows in dynamic matrix "x" by integer vector "ind"
inline Eigen::MatrixXd reorder_rows(const Eigen::MatrixXd& x, const std::vector<int>& ind) {
Eigen::MatrixXd x_reordered(x.rows(), x.cols());
for (unsigned int i = 0; i < ind.size(); ++i)
x_reordered.row(i) = x.row(ind[i]);
return x_reordered;
}
// reorder elements in vector "x" by integer vector "ind"
inline Eigen::VectorXd reorder(const Eigen::VectorXd& x, const std::vector<int>& ind) {
Eigen::VectorXd x_reordered(x.size());
for (unsigned int i = 0; i < ind.size(); ++i)
x_reordered(i) = x(ind[i]);
return x_reordered;
}
// NNLS SOLVER
// optimized and modified from github.com/linxihui/NNLM "c_nnls" function
inline void nnls(Eigen::MatrixXd& a, Eigen::VectorXd& b, Eigen::MatrixXd& h, const size_t sample) {
double tol = 1;
for (uint8_t it = 0; it < 100 && (tol / b.size()) > 1e-8; ++it) {
tol = 0;
for (size_t i = 0; i < h.rows(); ++i) {
double diff = b(i) / a(i, i);
if (-diff > h(i, sample)) {
if (h(i, sample) != 0) {
b -= a.col(i) * -h(i, sample);
tol = 1;
h(i, sample) = 0;
}
} else if (diff != 0) {
h(i, sample) += diff;
b -= a.col(i) * diff;
tol += std::abs(diff / (h(i, sample) + 1e-15));
}
}
}
}
// NMF PROJECTION
void c_predict(spmat A, const Eigen::MatrixXd& w, Eigen::MatrixXd& h, const double L1, const double L2, const int threads) {
Eigen::MatrixXd a = AAt(w);
a.diagonal().array() *= (1 - L2);
#ifdef _OPENMP
#pragma omp parallel for num_threads(threads)
#endif
for (size_t i = 0; i < h.cols(); ++i) {
if (A.p[i] == A.p[i + 1]) continue;
Eigen::VectorXd b = Eigen::VectorXd::Zero(h.rows());
for (spmat::InnerIterator it(A, i); it; ++it)
b += it.value() * w.col(it.row());
b.array() -= L1;
nnls(a, b, h, i);
}
}
//[[Rcpp::export(rng = FALSE)]]
Eigen::MatrixXd predict_nmf(Rcpp::S4& A_, Eigen::MatrixXd& w, const double L1, const double L2, const int threads) {
spmat A(A_);
Eigen::MatrixXd h(w.rows(), A.cols());
if (w.rows() == A.rows() && w.cols() != A.rows())
w = w.transpose();
c_predict(A, w, h, L1, L2, threads);
return h;
}
// NMF FUNCTION
//[[Rcpp::export(rng = FALSE)]]
Rcpp::List run_nmf(const Rcpp::S4& A_, const Rcpp::S4& At_, const double tol, const uint16_t maxit, const bool verbose,
const double L1, const double L2, const uint16_t threads, Eigen::MatrixXd w) {
spmat A(A_), At(At_);
// check validity of parameters
if (L1 >= 1 || L2 >= 1 || L1 < 0 || L2 < 0)
Rcpp::stop("L1 and L2 must be strictly in the range (0,1]");
if (A.rows() != At.cols() || A.cols() != At.rows())
Rcpp::stop("A and At are not transpose-identical");
if (w.rows() == A.rows())
w = w.transpose();
else if (w.cols() != A.rows())
Rcpp::stop("dimensions of A and w are incompatible!");
if (verbose) Rprintf("\n%4s | %8s \n---------------\n", "iter", "tol");
Eigen::MatrixXd h(w.rows(), A.cols());
Eigen::VectorXd d(w.rows());
double tol_ = 1;
// alternating least squares updates of h and then w
for (uint16_t iter_ = 0; iter_ < maxit && tol_ > tol; ++iter_) {
Eigen::MatrixXd w_it = w;
c_predict(A, w, h, L1, L2, threads); // update h
scale(h, d);
Rcpp::checkUserInterrupt();
c_predict(At, h, w, L1, L2, threads); // update w
scale(w, d);
// calculate tolerance of the model fit to detect convergence
tol_ = cor(w, w_it); // absolute correlation between "w" across consecutive iterations
if (verbose) Rprintf("%4d | %8.2e\n", iter_ + 1, tol_);
Rcpp::checkUserInterrupt();
}
// sort factors in the model by diagonal weight
std::vector<int> indx = sort_index(d);
w = reorder_rows(w, indx);
d = reorder(d, indx);
h = reorder_rows(h, indx);
return Rcpp::List::create(
Rcpp::Named("w") = w.transpose(),
Rcpp::Named("d") = d,
Rcpp::Named("h") = h);
}
================================================
FILE: tests/testthat/test-SPOTlight-steps.R
================================================
library(SPOTlight)
library(SingleCellExperiment)
# library(RcppML)
set.seed(321)
# mock up some single-cell, mixture & marker data
sce <- mockSC(ng = 200, nc = 10, nt = 3)
spe <- mockSP(sce)
mgs <- getMGS(sce)
# Function to run the checks
.checks <- function(decon, sce) {
mtr <- decon[[1]]
rss <- decon[[2]]
expect_is(decon, "list")
expect_is(mtr, "matrix")
expect_is(rss, "numeric")
expect_identical(ncol(mtr), length(unique(sce$type)))
expect_identical(nrow(mtr), length(rss))
}
###############################
#### Run SPOTlight wrapper ####
###############################
set.seed(687)
res1 <- SPOTlight(
x = counts(sce),
y = counts(spe),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene",
pnmf = "NMF"
)
################################
#### Run SPOTlight by steps ####
################################
set.seed(687)
# Train NMF
mod_ls <- trainNMF(
x = counts(sce),
y = rownames(spe),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
res2 <- runDeconvolution(
x = spe,
mod = mod_ls[["mod"]],
ref = mod_ls[["topic"]]
)
# NMF ----
test_that("SPOTlight vs SPOTlight-steps", {
# basis and coef should be the same between SPOTlight and SPOTlight-steps
expect_true(all(res1[["NMF"]]$w == mod_ls[["mod"]]$w))
expect_true(all(res1[["NMF"]]$h == res2[["NMF"]]$h))
# Deconvolution results are the same
# expect_true(all(res1[["mat"]] == res2[["mat"]]))
expect_true(mean(abs(res1[["mat"]] - res2[["mat"]])) < 0.01)
# actually check the estimates are legit
# (MSE < 0.1 compared to simulated truth)
sim <- S4Vectors::metadata(spe)[[1]]
mse <- mean((res2[["mat"]] - sim)^2)
expect_true(mse < 0.01)
})
================================================
FILE: tests/testthat/test-SPOTlight.R
================================================
set.seed(321)
# mock up some single-cell, mixture & marker data
sce <- mockSC(ng = 200, nc = 10, nt = 3)
spe <- mockSP(sce)
mgs <- getMGS(sce)
# Create SpatialExperiment
spe1 <- SpatialExperiment::SpatialExperiment(
assay = list(counts = SingleCellExperiment::counts(spe)),
colData = SummarizedExperiment::colData(spe))
# Function to run the checks
.checks <- function(res, sce) {
mtr <- res[[1]]
rss <- res[[2]]
mod <- res[[3]]
expect_is(res, "list")
expect_is(mtr, "matrix")
expect_is(rss, "numeric")
expect_is(mod, "list")
expect_identical(ncol(mtr), length(unique(sce$type)))
expect_identical(sort(colnames(mtr)), sort(unique(as.character(sce$type))))
expect_identical(nrow(mtr), length(rss))
expect_identical(sort(rownames(mtr)), sort(names(rss)))
}
# .checks <- function(res, sce) {
# mod <- res[[1]]
# mtr <- res[[2]]
# expect_is(res, "list")
# expect_is(mtr, "matrix")
# expect_is(mod, "list")
# expect_identical(ncol(mtr), length(unique(sce$type)))
# expect_identical(nrow(mtr), ncol(mod$w))
# expect_identical(nrow(mtr), nrow(mod$h))
# }
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# ---- Check SPOTlight x, y inputs -------------------------------------------
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# .SPOTlight with SCE ----
test_that("SPOTlight x SCE rcpp", {
res <- SPOTlight(
x = sce,
y = as.matrix(counts(spe)),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
})
# .SPOTlight with SPE ----
test_that("SPOTlight x SCE spatial rcpp", {
res <- SPOTlight(
x = as.matrix(counts(sce)),
y = spe,
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
})
# .SPOTlight with SPE ----
test_that("SPOTlight x SCE spatial rcpp", {
res <- SPOTlight(
x = as.matrix(counts(sce)),
y = spe1,
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
})
# .SPOTlight with sparse matrix sc ----
test_that("SPOTlight x dgCMatrix SC rcpp", {
res <- SPOTlight(
x = Matrix::Matrix(counts(sce), sparse = TRUE),
y = as.matrix(counts(spe)),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
})
# .SPOTlight with sparse matrix sp ----
test_that("SPOTlight x dgCMatrix SP rcpp", {
res <- SPOTlight(
x = as.matrix(counts(sce)),
y = Matrix::Matrix(counts(spe), sparse = TRUE),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
})
# .SPOTlight with sparse matrix sc ----
test_that("SPOTlight x DelayedMatrix SC rcpp", {
res <- SPOTlight(
x = DelayedArray::DelayedArray(counts(sce)),
y = as.matrix(counts(spe)),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
})
# .SPOTlight with sparse matrix sp ----
test_that("SPOTlight x DelayedMatrix SP rcpp", {
res <- SPOTlight(
x = as.matrix(counts(sce)),
y = DelayedArray::DelayedArray(counts(sce)),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
})
# .SPOTlight with matrices in both ----
test_that("SPOTlight x matrices rcpp", {
res <- SPOTlight(
x = as.matrix(counts(sce)),
y = as.matrix(counts(spe)),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
})
# .SPOTlight with matrices in both and HVG----
test_that("SPOTlight x hvg rcpp", {
res <- SPOTlight(
x = as.matrix(counts(sce)),
y = as.matrix(counts(spe)),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene",
hvg = row.names(sce)[seq_len(50)]
)
.checks(res, sce)
})
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# ---- Check SPOTlight x, y inputs with NMF ----------------------------------
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# .SPOTlight with SCE ----
# test_that("SPOTlight x SCE NMF", {
# res <- SPOTlight(
# x = sce,
# y = as.matrix(counts(spe)),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
#
# .checks(res, sce)
# })
#
# # .SPOTlight with SPE ----
# test_that("SPOTlight x SCE spatial NMF", {
# res <- SPOTlight(
# x = as.matrix(counts(sce)),
# y = spe,
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
#
# .checks(res, sce)
# })
#
# # .SPOTlight with SPE ----
# test_that("SPOTlight x SCE spatial NMF", {
# res <- SPOTlight(
# x = as.matrix(counts(sce)),
# y = spe1,
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
#
# .checks(res, sce)
# })
#
# # .SPOTlight with Seurat SC ----
# test_that("SPOTlight x SEC NMF", {
# res <- SPOTlight(
# x = sec,
# y = as.matrix(counts(spe)),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
#
# .checks(res, sce)
# })
#
# # .SPOTlight with Seurat SP ----
# test_that("SPOTlight x SEP NMF", {
# res <- SPOTlight(
# x = as.matrix(counts(sce)),
# y = spe,
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
#
# .checks(res, sce)
# })
#
# # .SPOTlight with sparse matrix sc ----
# test_that("SPOTlight x dgCMatrix SC NMF", {
# res <- SPOTlight(
# x = Matrix::Matrix(counts(sce), sparse = TRUE),
# y = as.matrix(counts(spe)),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
# .checks(res, sce)
# })
#
# # .SPOTlight with sparse matrix sp ----
# test_that("SPOTlight x dgCMatrix SP NMF", {
# res <- SPOTlight(
# x = as.matrix(counts(sce)),
# y = Matrix::Matrix(counts(spe), sparse = TRUE),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
#
# .checks(res, sce)
# })
#
# # .SPOTlight with sparse matrix sc ----
# test_that("SPOTlight x DelayedMatrix SC NMF", {
# res <- SPOTlight(
# x = DelayedArray::DelayedArray(counts(sce)),
# y = as.matrix(counts(spe)),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
# .checks(res, sce)
# })
#
# # .SPOTlight with sparse matrix sp ----
# test_that("SPOTlight x DelayedMatrix SP NMF", {
# res <- SPOTlight(
# x = as.matrix(counts(sce)),
# y = DelayedArray::DelayedArray(counts(sce)),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
#
# .checks(res, sce)
# })
#
# # .SPOTlight with matrices in both ----
# test_that("SPOTlight x matrices NMF", {
# res <- SPOTlight(
# x = as.matrix(counts(sce)),
# y = as.matrix(counts(spe)),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
#
# .checks(res, sce)
# })
#
# # .SPOTlight with matrices in both and HVG----
# test_that("SPOTlight x hvg NMF", {
# res <- SPOTlight(
# x = as.matrix(counts(sce)),
# y = as.matrix(counts(spe)),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene",
# hvg = row.names(sce)[seq_len(50)]
# )
#
# .checks(res, sce)
# })
#
================================================
FILE: tests/testthat/test-plotCorrelationMatrix.R
================================================
set.seed(321)
x <- replicate(m <- 25, runif(10, 0, 1))
# Add an anticorrelated column
x[, 24] <- seq(0, 1, length.out = 10)
x[, 25] <- seq(1, 0, length.out = 10)
rownames(x) <- paste0("spot", seq_len(nrow(x)))
colnames(x) <- paste0("type", seq_len(ncol(x)))
.checks <- function(p) {
expect_is(p, "ggplot")
expect_true(all(p$data$p >= 0))
expect_true(all(p$data$p <= 1))
expect_true(is.numeric(p$data$value))
expect_true(max(p$data$value) == 1)
expect_true(nrow(p$data) == m * m)
}
# plotCorrelationMatrix basic ----
test_that("plotCorrelationMatrix basic", {
# The most basic example
p <- plotCorrelationMatrix(x = x)
.checks(p)
})
# plotCorrelationMatrix() spearman correlation ----
test_that("plotCorrelationMatrix() spearman", {
# The most basic example
p <- plotCorrelationMatrix(
x = x,
cor.method = "kendall"
)
.checks(p)
})
# plotCorrelationMatrix() ----
test_that("plotCorrelationMatrix() insig", {
# The most basic example
p <- plotCorrelationMatrix(
x = x,
insig = "pch"
)
.checks(p)
# This adds an extra layer with the X on top of the insig
expect_true(length(p$layers) == 2)
})
# plotCorrelationMatrix() colors ----
test_that("plotCorrelationMatrix() colors", {
# The most basic example
p <- plotCorrelationMatrix(
x = x,
colors = c("#FF00FF", "#FFFFFF", "#000000")
)
.checks(p)
g <- ggplot_build(p)
# max color
i <- which(p$data$value == max(p$data$value))[[1]]
expect_identical(g$data[[1]][i, ][, "fill"], "#000000")
# 0 color
j <- which(p$data$value == 0)[[1]]
expect_identical(g$data[[1]][j, ][, "fill"], "#FFFFFF")
# min color
k <- which(p$data$value == min(p$data$value))[[1]]
expect_identical(g$data[[1]][k, ][, "fill"], "#FF00FF")
})
# plotCorrelationMatrix() hc.order ----
test_that("plotCorrelationMatrix() hc.order", {
# The most basic example
p <- plotCorrelationMatrix(
x = x,
hc.order = FALSE
)
.checks(p)
# Make sure the order is no changed
expect_equal(as.character(p$data$Var1[seq_len(ncol(x))]), colnames(x))
})
# plotCorrelationMatrix() p.mat ----
test_that("plotCorrelationMatrix() p.mat", {
# The most basic example
p <- plotCorrelationMatrix(
x = x,
p.mat = FALSE
)
# Make sure the p value is not computed
expect_is(p, "ggplot")
expect_true(is.numeric(p$data$value))
expect_true(max(p$data$value) == 1)
expect_true(nrow(p$data) == m * m)
expect_true(all(is.na(p$data$pvalue)))
expect_true(all(is.na(p$data$signif)))
})
================================================
FILE: tests/testthat/test-plotImage.R
================================================
set.seed(321)
# x_path <- paste0(system.file(package = "SPOTlight"), "/extdata/image.png")
# x_path <- "../../inst/extdata/SPOTlight.png"
x_path <- paste0(system.file(package = "SPOTlight"), "/extdata/SPOTlight.png")
# plotImage() ----
test_that("plotImage path", {
# image
x <- x_path
plt <- plotImage(x = x)
expect_true(is_ggplot(plt))
})
# plotImage() ----
test_that("plotImage array", {
# image
x <- png::readPNG(x_path)
plt <- plotImage(x = x)
expect_true(is_ggplot(plt))
})
# Can't run this on Bioconductor since it doesn't accept github packages
# test_that("plotImage Seurat", {
# # if (!"SeuratData" %in% installed.packages()) {
# # devtools::install_github("satijalab/seurat-data")
# # }
# # image
# if (!"stxBrain.SeuratData" %in% suppressWarnings(SeuratData::InstalledData()$Dataset))
# suppressWarnings(SeuratData::InstallData(ds = "stxBrain.SeuratData"))
#
# x <- suppressWarnings(SeuratData::LoadData(
# ds = "stxBrain",
# type = "anterior1"))
#
# plt <- plotImage(x = x)
# expect_equal(class(plt)[1], "gg")
# })
test_that("plotImage SPE", {
# image
library(ExperimentHub)
eh <- ExperimentHub() # initialize hub instance
q <- query(eh, "TENxVisium") # retrieve 'TENxVisiumData' records
id <- q$ah_id[1] # specify dataset ID to load
x <- eh[[id]]
plt <- plotImage(x = x)
expect_true(is_ggplot(plt))
})
================================================
FILE: tests/testthat/test-plotInteractions.R
================================================
# helper to record base R plot
# plotNetwork() ----
# record base R plot
. <- \(.) {
pdf(NULL)
dev.control(displaylist = "enable")
set.seed(1)
.
. <- recordPlot()
invisible(dev.off())
return(.)
}
# mock up some data
set.seed(321)
x <- replicate(m <- 10, rnorm(n <- 100, runif(1, -1, 1)))
# Add columns to check characteristics of interest
x[, 8] <- 1
x[, 9] <- 1
x[, 10] <- -1
test_that("plotInteractions(), which = 'heatmap', metric = default", {
p <- plotInteractions(x, "heatmap")
expect_is(p, "ggplot")
expect_true(all(p$data$p >= 0))
expect_true(all(p$data$p <= 1))
expect_true(is.integer(p$data$n))
expect_true(nrow(p$data) == m * (m - 1) / 2)
})
test_that("plotInteractions(), which = 'heatmap', metric = 'jaccard'", {
p <- plotInteractions(x, "heatmap", "jaccard")
expect_is(p, "ggplot")
expect_true(all(p$data$p >= 0))
expect_true(all(p$data$p <= 1))
expect_true(is.integer(p$data$n))
expect_true(nrow(p$data) == m * (m - 1) / 2)
})
test_that("plotInteractions(), which = 'heatmap', tunning", {
p <- plotInteractions(x, "heatmap") +
scale_fill_gradient(low = "#FFFF00", high = "#FF0000")
# Same base checks
expect_is(p, "ggplot")
expect_true(all(p$data$p >= 0))
expect_true(all(p$data$p <= 1))
expect_true(is.integer(p$data$n))
expect_true(nrow(p$data) == m * (m - 1) / 2)
# Color checks
g <- ggplot_build(p)
d1 <- g$data[[1]]
d2 <- g$data[[2]]
# Access through tiles coordinates
# min <- d1[d1$x == max(d1$x) & d1$y == max(d1$y), "fill"]
# expect_equal(min, "#FFFF00")
expect_true("#FFFF00" %in% d1$fill)
# max <- d1[d1$x == 7 & d1$y == 1, "fill"]
# expect_equal(max, "#FF0000")
expect_true("#FF0000" %in% d1$fill)
# na <- d2[d2$x == 2 & d2$y == 1, "fill"]
# expect_equal(na, "grey50")
expect_true("grey50" %in% d2$fill)
})
test_that("plotInteractions(), which = 'network', metric = 'default'", {
p <- .(plotInteractions(x, "network"))
expect_is(p, "recordedplot")
p[[1]][[6]][[2]]$col
})
test_that("plotInteractions(), which = 'network', metric = 'jaccard'", {
p <- .(plotInteractions(x, "network", "jaccard"))
expect_is(p, "recordedplot")
p[[1]][[6]][[2]]$col
})
test_that("plotInteractions(), which = 'network', tunning", {
p <- .(plotInteractions(
x,
which = "network",
edge.color = "cyan",
vertex.color = "pink",
vertex.label.font = 2,
vertex.label.color = "maroon"
))
expect_is(p, "recordedplot")
# Test edge color
expect_equal(p[[1]][[6]][[2]]$col, "cyan")
# Vertex label color and font
# expect_equal(p[[1]][[10]][[2]][[9]], "maroon")
# expect_equal(p[[1]][[10]][[2]][[10]], 2)
# Vertex color
# expect_equal(p[[1]][[8]][[2]][[7]], "pink")
})
================================================
FILE: tests/testthat/test-plotSpatialScatterpie.R
================================================
set.seed(321)
# Coordinates
x <- matrix(nrow = 10, data = c(seq_len(10), 10:1))
rownames(x) <- paste0("spot", seq_len(nrow(x)))
colnames(x) <- c("coord_x", "coord_y")
# Proportions
y <- replicate(m <- 5, runif(10, 0, 1))
y <- y / rowSums(y)
rownames(y) <- paste0("spot", seq_len(nrow(y)))
colnames(y) <- paste0("type", seq_len(ncol(y)))
# image
img <- paste0(system.file(package = "SPOTlight"), "/extdata/SPOTlight.png")
# plotSpatialScatterpie() ----
test_that("plotSpatialScatterpie with matrix and bad colnames", {
plt <- plotSpatialScatterpie(
x = x,
y = y
)
expect_true(is_ggplot(plt))
})
colnames(x) <- c("imagecol", "imagerow")
# plotSpatialScatterpie() ----
test_that("plotSpatialScatterpie with matrix and bad colnames", {
plt <- plotSpatialScatterpie(
x = x,
y = y
)
expect_true(is_ggplot(plt))
})
# plotSpatialScatterpie() ----
test_that("plotSpatialScatterpie - image", {
plt <- plotSpatialScatterpie(
x = x,
y = y,
img = img
)
expect_true(is_ggplot(plt))
})
# plotSpatialScatterpie() ----
test_that("plotSpatialScatterpie - type subset", {
plt <- plotSpatialScatterpie(
x = x,
y = y,
cell_types = colnames(y)[seq_len(3)]
)
expect_true(is_ggplot(plt))
})
# plotSpatialScatterpie() ----
test_that("plotSpatialScatterpie - alpha", {
plt <- plotSpatialScatterpie(
x = x,
y = y,
scatterpie_alpha = 0.5
)
expect_true(is_ggplot(plt))
expect_lt(plt$layers[[1]]$aes_params$alpha, 1)
})
# plotSpatialScatterpie() ----
test_that("plotSpatialScatterpie - pie_scale", {
plt <- plotSpatialScatterpie(
x = x,
y = y,
pie_scale = 0.1
)
expect_true(is_ggplot(plt))
})
library(SpatialExperiment)
example(read10xVisium, echo = FALSE)
# Proportions
spe_y <- replicate(m <- 5, runif(ncol(spe), 0, 1))
spe_y <- spe_y / rowSums(spe_y)
rownames(spe_y) <- colnames(spe)
colnames(spe_y) <- paste0("type", seq_len(ncol(spe_y)))
# plotSpatialScatterpie() img TRUE----
test_that("plotSpatialScatterpie - image", {
plt <- plotSpatialScatterpie(
x = spe,
y = spe_y,
img = TRUE
)
expect_true(is_ggplot(plt))
# Make sure there is an image
expect_true(is(plt$layers[[1]]$geom, "GeomCustomAnn"))
})
# plotSpatialScatterpie() img TRUE----
test_that("plotSpatialScatterpie - spots on image", {
plt <- plotSpatialScatterpie(
x = spe,
y = spe_y,
img = TRUE
)
expect_true(is_ggplot(plt))
# Make sure there is an image
expect_true(is(plt$layers[[1]]$geom, "GeomCustomAnn"))
# Check the spots are on within the image coordinates
x_y_min_max <- plt$layers[[1]]$geom_params
point_df <- plt$layers[[2]]$data
expect_true(max(point_df$coord_x) <= x_y_min_max$xmax)
expect_true(min(point_df$coord_x) >= x_y_min_max$xmin)
expect_true(max(point_df$coord_y) <= x_y_min_max$ymax)
expect_true(min(point_df$coord_y) >= x_y_min_max$ymin)
})
================================================
FILE: tests/testthat/test-plotTopicProfiles.R
================================================
set.seed(123)
x <- mockSC(nc = 50, nt = 3)
y <- mockSP(x)
z <- getMGS(x)
res <- SPOTlight(
x,
y,
groups = x$type,
mgs = z,
group_id = "type",
verbose = FALSE)
test_that("plotTopicProfiles common", {
p <- plotTopicProfiles(x = res[[3]], y = x$type, facet = FALSE, min_prop = 0.1)
expect_is(p, "ggplot")
expect_equal(nrow(p$data), 9)
expect_equal(sort(unique(p$data$group)), as.character(sort(unique(x$type))))
expect_equal(ncol(p$data), 3)
expect_equal(
as.character(sort(unique(p$data$topic))),
as.character(seq_len(length(unique(x$type)))))
g <- ggplot_build(p)
expect_true(all(c("#3D2BFF", "#D3D3D3") %in% unique(g$data[[1]]$colour)))
})
test_that("plotTopicProfiles facet", {
p <- plotTopicProfiles(res[[3]], x$type, facet = TRUE, min_prop = 0.1)
expect_is(p, "ggplot")
expect_equal(nrow(p$data), 160)
expect_equal(ncol(p$data), 4)
g <- ggplot_build(p)
expect_true(all(c("#3D2BFF", "#4931FE", "#D3D3D3") %in% unique(g$data[[1]]$colour)))
})
================================================
FILE: tests/testthat/test-runDeconvolution.R
================================================
set.seed(321)
# mock up some single-cell, mixture & marker data
sce <- mockSC(ng = 200, nc = 10, nt = 3)
spe <- mockSP(sce)
mgs <- getMGS(sce)
# Create SpatialExperiment
spe1 <- SpatialExperiment::SpatialExperiment(
assay = list(counts = SingleCellExperiment::counts(spe)),
colData = SummarizedExperiment::colData(spe))
# Function to run the checks
.checks <- function(decon, sce, spe) {
mtr <- decon[[1]]
rss <- decon[[2]]
expect_is(decon, "list")
expect_is(mtr, "matrix")
expect_is(rss, "numeric")
expect_identical(ncol(mtr), length(unique(sce$type)))
expect_identical(sort(colnames(mtr)), sort(unique(as.character(sce$type))))
expect_identical(nrow(mtr), length(rss))
expect_identical(sort(rownames(mtr)), sort(names(rss)))
dif <- rowSums((mtr - metadata(spe)$props)^2)
median_ss <- median(dif)
mean_ss <- mean(dif)
expect_true(mean_ss < 0.1 & median_ss < 0.1)
}
# Train NMF
res <- trainNMF(
x = as.matrix(counts(sce)),
y = rownames(spe),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# ---- Check runDeconvolution x, y inputs ------------------------------------
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# runDeconvolution with SCE ----
test_that("runDeconvolution x SCE", {
decon <- runDeconvolution(
x = spe,
mod = res[["mod"]],
ref = res[["topic"]]
)
.checks(decon, sce, spe)
})
test_that("runDeconvolution x SPE", {
decon <- runDeconvolution(
x = spe1,
mod = res[["mod"]],
ref = res[["topic"]]
)
.checks(decon, sce, spe)
})
# runDeconvolution with sparse matrix sp ----
test_that("runDeconvolution x dgCMatrix SP", {
decon <- runDeconvolution(
x = Matrix::Matrix(counts(spe), sparse = TRUE),
mod = res[["mod"]],
ref = res[["topic"]]
)
.checks(decon, sce, spe)
})
# runDeconvolution with sparse matrix sp ----
test_that("runDeconvolution x DelayedMatrix SP", {
decon <- runDeconvolution(
x = DelayedArray::DelayedArray(counts(spe)),
mod = res[["mod"]],
ref = res[["topic"]]
)
.checks(decon, sce, spe)
})
# runDeconvolution with matrices in both ----
test_that("runDeconvolution x matrices", {
decon <- runDeconvolution(
x = as.matrix(counts(spe)),
mod = res[["mod"]],
ref = res[["topic"]]
)
.checks(decon, sce, spe)
})
================================================
FILE: tests/testthat/test-trainNMF.R
================================================
set.seed(321)
# mock up some single-cell, mixture & marker data
sce <- mockSC(ng = 200, nc = 10, nt = 3)
spe <- mockSP(sce)
mgs <- getMGS(sce)
.checks <- function(res, sce) {
mod <- res[[1]]
mtr <- res[[2]]
expect_is(res, "list")
expect_is(mtr, "matrix")
expect_is(mod, "list")
expect_identical(ncol(mtr), length(unique(sce$type)))
expect_identical(nrow(mtr), ncol(mod$w))
expect_identical(nrow(mtr), nrow(mod$h))
}
# Unit test to verify that the topic with max weight in each cell aligns with type
.check_topic_alignment <- function(mod_h, topic) {
cell_pos <- c(1, 11, 21)
type_names <- rownames(topic)[1:3] # types 1 to 3
for (i in seq_along(cell_pos)) {
cell <- cell_pos[i]
type <- type_names[i]
expect_equal(which.max(mod_h[, cell]), which.max(topic[type, ]))
}
}
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# ---- Check RCPP trainNMF x, y inputs -------------------------------------------
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# trainNMF with SCE ----
test_that("rcpp trainNMF x SCE", {
set.seed(321)
res <- trainNMF(
x = sce,
y = rownames(spe),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
.check_topic_alignment(res$mod$h, res$topic)
})
# trainNMF with sparse matrix sc ----
test_that("rcpp trainNMF x dgCMatrix SC", {
set.seed(321)
res <- trainNMF(
x = Matrix::Matrix(counts(sce), sparse = TRUE),
y = rownames(spe),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
.check_topic_alignment(res$mod$h, res$topic)
})
# trainNMF with sparse matrix sc ----
test_that("rcpp trainNMF x DelayedMatrix SC", {
set.seed(321)
res <- trainNMF(
x = DelayedArray::DelayedArray(counts(sce)),
y = rownames(spe),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
.check_topic_alignment(res$mod$h, res$topic)
})
# trainNMF with matrices in both ----
test_that("rcpp trainNMF x matrices", {
set.seed(321)
res <- trainNMF(
x = as.matrix(counts(sce)),
y = rownames(spe),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene"
)
.checks(res, sce)
.check_topic_alignment(res$mod$h, res$topic)
})
# trainNMF with matrices in both and HVG----
test_that("rcpp trainNMF x hvg", {
set.seed(321)
res <- trainNMF(
x = as.matrix(counts(sce)),
y = rownames(spe),
groups = sce$type,
mgs = mgs,
weight_id = "weight",
group_id = "type",
gene_id = "gene",
hvg = row.names(sce)[seq_len(50)]
)
.checks(res, sce)
.check_topic_alignment(res$mod$h, res$topic)
})
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# ---- Check NMF::nmf trainNMF x, y inputs -------------------------------------------
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# trainNMF with SCE ----
# test_that("NMF trainNMF x SCE", {
# res <- trainNMF(
# x = sce,
# y = rownames(spe),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
#
# .checks(res, sce)
# })
#
# # trainNMF with SPE ----
# test_that("NMF trainNMF x SEC", {
# res <- trainNMF(
# x = sec,
# y = rownames(spe),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
#
# .checks(res, sce)
# })
#
# # trainNMF with sparse matrix sc ----
# test_that("NMF trainNMF x dgCMatrix SC", {
# res <- trainNMF(
# x = Matrix::Matrix(counts(sce), sparse = TRUE),
# y = rownames(spe),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
# .checks(res, sce)
# })
#
# # trainNMF with sparse matrix sc ----
# test_that("NMF trainNMF x DelayedMatrix SC", {
# res <- trainNMF(
# x = DelayedArray::DelayedArray(counts(sce)),
# y = rownames(spe),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
# .checks(res, sce)
# })
#
# # trainNMF with matrices in both ----
# test_that("NMF trainNMF x matrices", {
# res <- trainNMF(
# x = as.matrix(counts(sce)),
# y = rownames(spe),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene"
# )
#
# .checks(res, sce)
# })
#
# # trainNMF with matrices in both and HVG----
# test_that("NMF trainNMF x hvg", {
# res <- trainNMF(
# x = as.matrix(counts(sce)),
# y = rownames(spe),
# groups = sce$type,
# pnmf = "NMF",
# mgs = mgs,
# weight_id = "weight",
# group_id = "type",
# gene_id = "gene",
# hvg = row.names(sce)[seq_len(50)]
# )
#
# .checks(res, sce)
# })
#
================================================
FILE: tests/testthat/test-utils.R
================================================
set.seed(321)
# mock up some single-cell, mixture & marker data
sce <- mockSC()
spe <- mockSP(sce)
mgs <- getMGS(sce)
# .scale_uv ----
test_that(".scale_uv()", {
x <- counts(sce)
y <- .scale_uv(x)
expect_is(y, "matrix")
expect_identical(dim(y), dim(x))
expect_identical(dimnames(y), dimnames(x))
expect_true(all(abs(1 - sparseMatrixStats::rowVars(y)) < 1e-12))
})
# default parameters
defs <- list(
gene_id = "gene",
group_id = "type",
weight_id = "weight",
verbose = FALSE
)
# NMF ----
test_that("NMF", {
x <- counts(sce)
y <- counts(spe)
groups <- sce$type
group_ids <- sort(unique(as.character(sce$type)))
n_groups <- length(group_ids)
# + trainNMF ----
# undetected genes should be filtered out
# and pass silently (i.e., without error)
i <- sample(rownames(x), 5)
j <- sample(rownames(y), 5)
x. <- x
x.[i, ] <- 0
y. <- y
y.[j, ] <- 0
args <- c(defs, list(x., rownames(y.), groups, mgs))
fit <- expect_silent(do.call(trainNMF, args))
mod <- fit[["mod"]]
expect_is(mod, "list")
expect_true(!all(c(i, j) %in% rownames(mod)))
# Only marker genes should be present - we don't use hvg here
expect_true(all(rownames(mod) %in% mgs$gene))
# valid call should give an object of class 'NMF'
# and dimension (#genes) x (#cells) x (#groups)
args <- c(defs, list(x, rownames(y), groups, mgs))
fit <- expect_silent(do.call(trainNMF, args))
mod <- fit[["mod"]]
expect_is(mod, "list")
# Remove genes since these can change depending on
# filtering, mgs, hvg, all 0...
expect_identical(
dimnames(mod$h)[1:2],
c(list(paste0("topic_", 1:nrow(mod$h))), dimnames(x)[2]))
expect_identical(
dimnames(mod$w)[1:2],
c(list(mgs$gene), list(paste0("topic_", 1:nrow(mod$h)))))
# + .topic_profiles ----
# should give a square numeric matrix
# of dimension (#groups) x (#groups)
ref <- .topic_profiles(mod, groups)
expect_is(ref, "matrix")
expect_equal(dim(ref), rep(n_groups, 2))
expect_identical(rownames(ref), group_ids)
expect_identical(colnames(ref), paste0("topic_", 1:nrow(ref)))
# + .pred_hp ----
fqs <- .pred_hp(x, mod)
expect_is(fqs, "matrix")
expect_true(is.numeric(x))
expect_true(all(fqs >= 0))
expect_equal(dim(fqs), c(n_groups, ncol(x)))
expect_identical(
dimnames(fqs),
list(paste0("topic_", 1:nrow(ref)), colnames(x)))
# + runDeconvolution ----
# should give a numeric matrix
# of dimension (#groups) x (#spots)
# with proportions (i.e., values in [0, 1])
res <- runDeconvolution(x = y, mod = mod, ref = ref)
mat <- res[[1]]
err <- res[[2]]
expect_is(mat, "matrix")
expect_true(is.numeric(err))
expect_true(is.numeric(x))
expect_true(all(mat >= 0 & mat <= 1))
expect_true(all(rowSums(mat) - 1 < 1e-12))
expect_identical(dimnames(mat), list(colnames(y), group_ids))
expect_identical(rownames(mat), names(err))
# actually check the estimates are legit
# (MSE < 0.1 compared to simulated truth)
sim <- S4Vectors::metadata(spe)[[1]]
mse <- mean((mat - sim)^2)
expect_true(mse < 0.2)
})
# image
library(ExperimentHub)
eh <- ExperimentHub() # initialize hub instance
q <- query(eh, "TENxVisium") # retrieve 'TENxVisiumData' records
id <- q$ah_id[1] # specify dataset ID to load
spe <- eh[[id]]
colLabels(spe) <- spe$sample_id
# .extract_counts
test_that(".extract_counts()", {
x <- suppressWarnings(.extract_counts(spe, slot = "counts"))
expect_identical(dim(counts(spe)), dim(x))
expect_identical(dimnames(spe), dimnames(x))
})
# .scale_uv
test_that("scale_uv()", {
x <- counts(sce)
y <- .scale_uv(x)
expect_is(y, "matrix")
expect_identical(dim(y), dim(x))
expect_identical(dimnames(y), dimnames(x))
expect_true(all(abs(1 - rowVars(y)) < 1e-12))
})
# .plot_image
x_path <- paste0(system.file(package = "SPOTlight"), "/extdata/SPOTlight.png")
test_that(".plot_image() SPE", {
img <- .extract_image(x_path)
plt <- .plot_image(img)
expect_true(is.array(img))
expect_true(is_ggplot(plt))
})
test_that(".plot_image() SPE", {
img <- .extract_image(spe)
plt <- .plot_image(img)
expect_true(is_ggplot(plt))
expect_true(is.matrix(img))
})
================================================
FILE: tests/testthat.R
================================================
set.seed(321)
library(testthat)
library(SPOTlight)
# plotImage() ----
test_check("SPOTlight")
================================================
FILE: vignettes/SPOTlight_kidney.Rmd
================================================
---
title: "Spatial Transcriptomics Deconvolution with `SPOTlight`"
date: "`r BiocStyle::doc_date()`"
author:
- name: Marc Elosua-Bayes
affiliation:
- &CNAG-CRG National Center for Genomic Analysis - Center for Genomic Regulation
- &UPF University Pompeu Fabra
email: marc.elosua@cnag.crg.eu
- name: Helena L. Crowell
affiliation:
- &IMLS Institute for Molecular Life Sciences, University of Zurich, Switzerland
- &SIB SIB Swiss Institute of Bioinformatics, University of Zurich, Switzerland
email: helena.crowell@uzh.ch
- name: Zach DeBruine
affiliation:
- &VAI Van Andel Institue
email: Zach.DeBruine@vai.edu
abstract: >
<p> Spatially resolved gene expression profiles are key to understand tissue organization and function. However, novel spatial transcriptomics (ST) profiling techniques lack single-cell resolution and require a combination with single-cell RNA sequencing (scRNA-seq) information to deconvolute the spatially indexed datasets. Leveraging the strengths of both data types, we developed SPOTlight, a computational tool that enables the integration of ST with scRNA-seq data to infer the location of cell types and states within a complex tissue. SPOTlight is centered around a seeded non-negative matrix factorization (NMF) regression, initialized using cell-type marker genes and non-negative least squares (NNLS) to subsequently deconvolute ST capture locations (spots).
package: "`r BiocStyle::pkg_ver('SPOTlight')`"
vignette: >
%\VignetteIndexEntry{"SPOTlight"}
%\VignettePackage{SPOTlight}
%\VignetteEncoding{UTF-8}
%\VignetteEngine{knitr::rmarkdown}
output:
BiocStyle::html_document
editor_options:
markdown:
wrap: 80
---
```{=html}
<style type="text/css">
.smaller {
font-size: 10px
}
</style>
```
For a more detailed explanation of `SPOTlight` consider looking at our
manuscript:
> Elosua-Bayes M, Nieto P, Mereu E, Gut I, Heyn H.
SPOTlight: seeded NMF regression to deconvolute
spatial transcriptomics spots with single-cell transcriptomes.
*Nucleic Acids Res.* **2021;49(9):e50**. doi: [10.1093](10.1093/nar/gkab043)
# Load packages {.unnumbered}
```{r load-libs, message = FALSE, warning = FALSE}
library(ggplot2)
library(SPOTlight)
library(SingleCellExperiment)
library(SpatialExperiment)
library(scater)
library(scran)
```
# Introduction
## What is `SPOTlight`?
`SPOTlight` is a tool that enables the deconvolution of cell types and cell type
proportions present within each capture location comprising mixtures of cells.
Originally developed for 10X's Visium - spatial transcriptomics - technology, it
can be used for all technologies returning mixtures of cells.
`SPOTlight` is based on learning topic profile signatures, by means of an NMFreg
model, for each cell type and finding which combination of cell types fits best
the spot we want to deconvolute. Find below a graphical abstract visually
summarizing the key steps.

## Starting point
The minimal unit of data required to run `SPOTlight` are:
- ST (sparse) matrix with the expression, raw or normalized, where rows = genes
and columns = capture locations.
- Single cell (sparse) matrix with the expression, raw or normalized,
where rows = genes and columns = cells.
- Vector indicating the cell identity for each column in the single cell
expression matrix.
Data inputs can also be objects of class `r Biocpkg("SpatialExperiment")` (SE),
`r Biocpkg("SingleCellExperiment")` (SCE) objects from
which the minimal data will be extracted.
# Getting started
## Data description
For this vignette, we will use a SE put out by *10X Genomics* containing a
Visium kidney slide. The raw data can be accessed
[here](https://support.10xgenomics.com/spatial-gene-expression/datasets/1.1.0/V1_Mouse_Kidney).
SCE data comes from the [*The Tabula Muris
Consortium*](https://www.nature.com/articles/s41586-020-2496-1) which contains
\>350,000 cells from from male and female mice belonging to six age groups,
ranging from 1 to 30 months. From this dataset we will only load the kidney
subset to map it to the Visium slide.
## Loading the data
Both datasets are available through Biocondcutor packages and can be loaded into R as follows.
`
Load the spatial data:
```{r load-sp, message=FALSE}
library(TENxVisiumData)
spe <- MouseKidneyCoronal()
# Use symbols instead of Ensembl IDs as feature names
rownames(spe) <- rowData(spe)$symbol
```
Load the single cell data. Since our data comes from the
[Tabula Muris Sensis](https://www.nature.com/articles/s41586-020-2496-1)
dataset, we can directly load the SCE object as follows:
```{r load-sc, message=FALSE}
library(TabulaMurisSenisData)
sce <- TabulaMurisSenisDroplet(tissues = "Kidney")$Kidney
```
Quick data exploration:
```{r explo}
table(sce$free_annotation, sce$age)
```
We see how there is a good representation of all the cell types across ages
except at 24m. In order to reduce the potential noise introduced by age and
batch effects we are going to select cells all coming from the same age.
```{r sub-18m}
# Keep cells from 18m mice
sce <- sce[, sce$age == "18m"]
# Keep cells with clear cell type annotations
sce <- sce[, !sce$free_annotation %in% c("nan", "CD45")]
```
# Workflow
## Preprocessing
If the dataset is very large we want to downsample it to train the model, both
in of number of cells and number of genes. To do this, we want to keep a
representative amount of cells per cluster and the most biologically relevant
genes.
In the paper we show how downsampling the number of cells per cell identity to
\~100 doesn't affect the performance of the model. Including \>100 cells per
cell identity provides marginal improvement while greatly increasing
computational time and resources. Furthermore, restricting the gene set to the
marker genes for each cell type along with up to 3.000 highly variable genes
further optimizes performance and computational resources. You can find a more
detailed explanation in the original
[paper](https://academic.oup.com/nar/article/49/9/e50/6129341).
### Feature selection
Our first step is to get the marker genes for each cell type. We follow the
Normalization procedure as described in
[OSCA](http://bioconductor.org/books/3.14/OSCA.basic/normalization.html). We
first carry out library size normalization to correct for cell-specific biases:
```{r lognorm}
sce <- logNormCounts(sce)
```
### Variance modelling
We aim to identify highly variable genes that drive biological heterogeneity.
By feeding these genes to the model we improve the resolution of the biological structure and reduce the technical noise.
```{r variance}
# Get vector indicating which genes are neither ribosomal or mitochondrial
genes <- !grepl(pattern = "^Rp[l|s]|Mt", x = rownames(sce))
dec <- modelGeneVar(sce, subset.row = genes)
plot(dec$mean, dec$total, xlab = "Mean log-expression", ylab = "Variance")
curve(metadata(dec)$trend(x), col = "blue", add = TRUE)
# Get the top 3000 genes.
hvg <- getTopHVGs(dec, n = 3000)
```
Next we obtain the marker genes for each cell identity. You can use whichever
method you want as long as it returns a weight indicating the importance of that
gene for that cell type. Examples include `avgLogFC`, `AUC`, `pct.expressed`,
`p-value`...
```{r mgs}
colLabels(sce) <- colData(sce)$free_annotation
# Compute marker genes
mgs <- scoreMarkers(sce, subset.row = genes)
```
Then we want to keep only those genes that are relevant for each cell identity:
```{r mgs-df}
mgs_fil <- lapply(names(mgs), function(i) {
x <- mgs[[i]]
# Filter and keep relevant marker genes, those with AUC > 0.8
x <- x[x$mean.AUC > 0.8, ]
# Sort the genes from highest to lowest weight
x <- x[order(x$mean.AUC, decreasing = TRUE), ]
# Add gene and cluster id to the dataframe
x$gene <- rownames(x)
x$cluster <- i
data.frame(x)
})
mgs_df <- do.call(rbind, mgs_fil)
```
### Cell Downsampling
Next, we randomly select at most 100 cells per cell identity. If a cell type is
comprised of \<100 cells, all the cells will be used. If we have very
biologically different cell identities (B cells vs. T cells vs. Macrophages vs.
Epithelial) we can use fewer cells since their transcriptional profiles will be
very different. In cases when we have more transcriptionally similar cell
identities we need to increase our N to capture the biological heterogeneity
between them.
In our experience we have found that for this step it is better to select the
cells from each cell type from the same batch if you have a joint dataset from
multiple runs. This will ensure that the model removes as much signal from the
batch as possible and actually learns the biological signal.
For the purpose of this vignette and to speed up the analysis, we are going to
use 20 cells per cell identity:
```{r downsample}
# split cell indices by identity
idx <- split(seq(ncol(sce)), sce$free_annotation)
# downsample to at most 20 per identity & subset
# We are using 5 here to speed up the process but set to 75-100 for your real
# life analysis
n_cells <- 50
cs_keep <- lapply(idx, function(i) {
n <- length(i)
if (n < n_cells)
n_cells <- n
sample(i, n_cells)
})
sce <- sce[, unlist(cs_keep)]
```
## Deconvolution
You are now set to run `SPOTlight` to deconvolute the spots!
Briefly, here is how it works:
1. NMF is used to factorize a matrix into two lower dimensionality matrices
without negative elements. We first have an initial matrix V (SCE count matrix),
which is factored into W and H. Unit variance normalization by gene is performed
in V and in order to standardize discretized gene expression levels,
‘counts-umi’. Factorization is then carried out using the non-smooth NMF method,
implemented in the R package NMF `r CRANpkg("NMF")` (14). This method is
intended to return sparser results during the factorization in W and H, thus
promoting cell-type-specific topic profile and reducing overfitting during
training. Before running factorization, we initialize each topic, column,
of W with the unique marker genes for each cell type with weights. In turn, each
topic of H in `SPOTlight` is initialized with the corresponding membership of each
cell for each topic, 1 or 0. This way, we seed the model with prior information,
thus guiding it towards a biologically relevant result. This initialization also
aims at reducing variability and improving the consistency between runs. \
2. NNLS regression is used to map each capture location's transcriptome in V’
(SE count matrix) to H’ using W as the basis. We obtain a topic profile
distribution over each capture location which we can use to determine its
composition. \
3. we obtain Q, cell-type specific topic profiles, from H. We select all cells
from the same cell type and compute the median of each topic for a consensus
cell-type-specific topic signature. We then use NNLS to find the weights of each
cell type that best fit H’ minimizing the residuals.
You can visualize the above explanation in the following workflow scheme:

```{r SPOTlight}
res <- SPOTlight(
x = sce,
y = spe,
groups = as.character(sce$free_annotation),
mgs = mgs_df,
hvg = hvg,
weight_id = "mean.AUC",
group_id = "cluster",
gene_id = "gene")
```
Alternatively you can run `SPOTlight` in two steps so that you can have the trained model. Having the trained model allows you to reuse with other datasets you also want to deconvolute with the same reference. This allows you to skip the training step, the most time consuming and computationally expensive.
```{r SPOTlight2, eval=FALSE}
mod_ls <- trainNMF(
x = sce,
groups = sce$free_annotation,
mgs = mgs_df,
hvg = hvg,
weight_id = "mean.AUC",
group_id = "cluster",
gene_id = "gene")
# Run deconvolution
res <- runDeconvolution(
x = spe,
mod = mod_ls[["mod"]],
ref = mod_ls[["topic"]])
```
Extract data from `SPOTlight`:
```{r}
# Extract deconvolution matrix
head(mat <- res$mat)[, seq_len(3)]
# Extract NMF model fit
mod <- res$NMF
```
# Visualization
In the next section we show how to visualize the data and interpret
`SPOTlight`'s results.
## Topic profiles
We first take a look at the Topic profiles. By setting `facet = FALSE` we want to
evaluate how specific each topic signature is for each cell identity.
Ideally each cell identity will have a unique topic profile associated to it as
seen below.
```{r plotTopicProfiles1, fig.width=6, fig.height=7}
plotTopicProfiles(
x = mod,
y = sce$free_annotation,
facet = FALSE,
min_prop = 0.01,
ncol = 1) +
theme(aspect.ratio = 1)
```
Next we also want to ensure that all the cells from the same cell identity share
a similar topic profile since this will mean that `SPOTlight` has learned a
consistent signature for all the cells from the same cell identity.
```{r plotTopicProfiles2, fig.width=9, fig.height=6}
plotTopicProfiles(
x = mod,
y = sce$free_annotation,
facet = TRUE,
min_prop = 0.01,
ncol = 6)
```
Lastly we can take a look at which genes the model learned for each topic.
Higher values indicate that the gene is more relevant for that topic.
In the below table we can see how the top genes for `Topic1` are characteristic
for B cells (i.e. *Cd79a*, *Cd79b*, *Ms4a1*...).
```{r basis-dt, message=FALSE, warning=FALSE}
# library(NMF)
sign <- mod$w
# colnames(sign) <- paste0("Topic", seq_len(ncol(sign)))
head(sign)
# This can be dynamically visualized with DT as shown below
# DT::datatable(sign, fillContainer = TRUE, filter = "top")
```
## Spatial Correlation Matrix
See [here](http://www.sthda.com/english/wiki/ggcorrplot-visualization-of-a-correlation-matrix-using-ggplot2)
for additional graphical parameters.
```{r plotCorrelationMatrix, fig.width=9, fig.height=9}
plotCorrelationMatrix(mat)
```
## Co-localization
Now that we know which cell types are found within each spot we can make a graph
representing spatial interactions where cell types will have stronger edges
between them the more often we find them within the same spot.
See [here](https://www.r-graph-gallery.com/network.html) for additional
graphical parameters.
```{r plotInteractions, fig.width=9, fig.height=9}
plotInteractions(mat, which = "heatmap", metric = "prop")
plotInteractions(mat, which = "heatmap", metric = "jaccard")
plotInteractions(mat, which = "network")
```
## Scatterpie
We can also visualize the cell type proportions as sections of a pie chart for
each spot. You can modify the colors as you would a standard `r CRANpkg("ggplot2")`.
```{r Scatterpie, fig.width=9,
gitextract_hdjmozgy/
├── .Rbuildignore
├── .github/
│ └── workflows/
│ └── check-bioc.yml
├── .gitignore
├── DESCRIPTION
├── LICENSE.md
├── NAMESPACE
├── NEWS
├── R/
│ ├── RcppExports.R
│ ├── SPOTlight.R
│ ├── data.R
│ ├── plotCorrelationMatrix.R
│ ├── plotImage.R
│ ├── plotInteractions.R
│ ├── plotSpatialScatterpie.R
│ ├── plotTopicProfiles.R
│ ├── runDeconvolution.R
│ ├── trainNMF.R
│ └── utils.R
├── README.md
├── man/
│ ├── SPOTlight.Rd
│ ├── data.Rd
│ ├── dot-filter.Rd
│ ├── plotCorrelationMatrix.Rd
│ ├── plotImage.Rd
│ ├── plotInteractions.Rd
│ ├── plotSpatialScatterpie.Rd
│ ├── plotTopicProfiles.Rd
│ ├── runDeconvolution.Rd
│ └── trainNMF.Rd
├── src/
│ ├── Makevars
│ ├── Makevars.win
│ ├── RcppExports.cpp
│ └── nmf.cpp
├── tests/
│ ├── testthat/
│ │ ├── test-SPOTlight-steps.R
│ │ ├── test-SPOTlight.R
│ │ ├── test-plotCorrelationMatrix.R
│ │ ├── test-plotImage.R
│ │ ├── test-plotInteractions.R
│ │ ├── test-plotSpatialScatterpie.R
│ │ ├── test-plotTopicProfiles.R
│ │ ├── test-runDeconvolution.R
│ │ ├── test-trainNMF.R
│ │ └── test-utils.R
│ └── testthat.R
└── vignettes/
└── SPOTlight_kidney.Rmd
SYMBOL INDEX (23 symbols across 2 files)
FILE: src/RcppExports.cpp
function RcppExport (line 16) | RcppExport SEXP _SPOTlight_predict_nmf(SEXP A_SEXP, SEXP wSEXP, SEXP L1S...
function RcppExport (line 30) | RcppExport SEXP _SPOTlight_run_nmf(SEXP A_SEXP, SEXP At_SEXP, SEXP tolSE...
function RcppExport (line 53) | RcppExport void R_init_SPOTlight(DllInfo *dll) {
FILE: src/nmf.cpp
class spmat (line 25) | class spmat {
method spmat (line 32) | spmat(Rcpp::NumericVector x, Rcpp::IntegerVector i, Rcpp::IntegerVecto...
method spmat (line 33) | spmat(const Rcpp::S4& s) {
method spmat (line 42) | spmat() {}
method rows (line 44) | size_t rows() { return Dim[0]; }
method cols (line 45) | size_t cols() { return Dim[1]; }
class InnerIterator (line 48) | class InnerIterator {
method InnerIterator (line 50) | InnerIterator(spmat& ptr, int col) : ptr(ptr), col_(col), index(ptr....
method InnerIterator (line 52) | InnerIterator& operator++() {
method row (line 57) | int row() const { return ptr.i[index]; }
function cor (line 67) | inline double cor(Eigen::MatrixXd& x, Eigen::MatrixXd& y) {
function AAt (line 83) | Eigen::MatrixXd AAt(const Eigen::MatrixXd& A) {
function scale (line 92) | void scale(Eigen::MatrixXd& w, Eigen::VectorXd& d) {
function sort_index (line 101) | inline std::vector<int> sort_index(const Eigen::VectorXd& d) {
function reorder_rows (line 109) | inline Eigen::MatrixXd reorder_rows(const Eigen::MatrixXd& x, const std:...
function reorder (line 117) | inline Eigen::VectorXd reorder(const Eigen::VectorXd& x, const std::vect...
function nnls (line 126) | inline void nnls(Eigen::MatrixXd& a, Eigen::VectorXd& b, Eigen::MatrixXd...
function c_predict (line 148) | void c_predict(spmat A, const Eigen::MatrixXd& w, Eigen::MatrixXd& h, co...
function predict_nmf (line 165) | Eigen::MatrixXd predict_nmf(Rcpp::S4& A_, Eigen::MatrixXd& w, const doub...
function run_nmf (line 177) | Rcpp::List run_nmf(const Rcpp::S4& A_, const Rcpp::S4& At_, const double...
Condensed preview — 45 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (206K chars).
[
{
"path": ".Rbuildignore",
"chars": 61,
"preview": "^.*\\.Rproj$\n^\\.Rproj\\.user$\n^_pkgdown\\.yml$\n^docs$\n^pkgdown$\n"
},
{
"path": ".github/workflows/check-bioc.yml",
"chars": 13417,
"preview": "## Read more about GitHub actions the features of this GitHub Actions workflow\n## at https://lcolladotor.github.io/bioct"
},
{
"path": ".gitignore",
"chars": 115,
"preview": ".Rproj.user\n*.Rproj\n.git/\n.DS_Store\n.Rhistory\n.RData\n.Ruserdata\nvignettes/*.html\nvignettes/*/\n*.pdf\n*/**/.pdf\ndocs\n"
},
{
"path": "DESCRIPTION",
"chars": 2056,
"preview": "Package: SPOTlight\nVersion: 1.13.2\nType: Package\nTitle: `SPOTlight`: Spatial Transcriptomics Deconvolution\nDescription: "
},
{
"path": "LICENSE.md",
"chars": 34739,
"preview": "GNU General Public License\n==========================\n\n_Version 3, 29 June 2007_ \n_Copyright © 2007 Free Software Found"
},
{
"path": "NAMESPACE",
"chars": 946,
"preview": "# Generated by roxygen2: do not edit by hand\n\nexport(SPOTlight)\nexport(getMGS)\nexport(mockSC)\nexport(mockSP)\nexport(plot"
},
{
"path": "NEWS",
"chars": 212,
"preview": "v0.99.1 ------------------------------------------------------------------------\n\n- text\n\nv0.99.0 ----------------------"
},
{
"path": "R/RcppExports.R",
"chars": 397,
"preview": "# Generated by using Rcpp::compileAttributes() -> do not edit by hand\n# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD"
},
{
"path": "R/SPOTlight.R",
"chars": 5821,
"preview": "#' @name SPOTlight\n#' @title Deconvolution of mixture using single-cell data\n#'\n#' @description This is the backbone fun"
},
{
"path": "R/data.R",
"chars": 4520,
"preview": "#' @rdname data\n#' @name data\n#' @aliases mockSC mockSP getMGS\n#' @title Synthetic single-cell, mixture and marker data\n"
},
{
"path": "R/plotCorrelationMatrix.R",
"chars": 3745,
"preview": "#' @rdname plotCorrelationMatrix\n#' @name plotCorrelationMatrix\n#' @title Plot Correlation Matrix\n#'\n#' @description Thi"
},
{
"path": "R/plotImage.R",
"chars": 1515,
"preview": "#' @rdname plotImage\n#' @name plotImage\n#' @title Plot JP(E)G/PNG/Raster/RGB images\n#'\n#' @description This function tak"
},
{
"path": "R/plotInteractions.R",
"chars": 5099,
"preview": "#' @rdname plotInteractions\n#' @title Plot group interactions\n#'\n#' @aliases plotHeatmap plotNetwork\n#'\n#' @description "
},
{
"path": "R/plotSpatialScatterpie.R",
"chars": 8066,
"preview": "#' @rdname plotSpatialScatterpie\n#' @name plotSpatialScatterpie\n#' @title Spatial scatterpie\n#'\n#' @description This fun"
},
{
"path": "R/plotTopicProfiles.R",
"chars": 3522,
"preview": "#' @rdname plotTopicProfiles\n#' @name plotTopicProfiles\n#' @title Plot NMF topic profiles\n#'\n#' @description This functi"
},
{
"path": "R/runDeconvolution.R",
"chars": 4069,
"preview": "#' @name runDeconvolution\n#' @rdname runDeconvolution\n#' @title Run Deconvolution using NNLS model\n#'\n#' @aliases runDec"
},
{
"path": "R/trainNMF.R",
"chars": 5619,
"preview": "#' @name trainNMF\n#' @rdname trainNMF\n#' @title train NMF model\n#'\n#' @aliases trainNMF\n#'\n#' @description This is the t"
},
{
"path": "R/utils.R",
"chars": 12986,
"preview": "#' @importFrom sparseMatrixStats rowSds\n.scale_uv <- function(x) {\n sds <- rowSds(x, na.rm = TRUE)\n # TODO find a "
},
{
"path": "README.md",
"chars": 2751,
"preview": "# Welcome to `SPOTlight` <img src=\"inst/extdata/SPOTlight.png\" width=\"200\" align=\"right\"/>\n\n### We are currently on the "
},
{
"path": "man/SPOTlight.Rd",
"chars": 4538,
"preview": "% Generated by roxygen2: do not edit by hand\n% Please edit documentation in R/SPOTlight.R\n\\name{SPOTlight}\n\\alias{SPOTli"
},
{
"path": "man/data.Rd",
"chars": 1694,
"preview": "% Generated by roxygen2: do not edit by hand\n% Please edit documentation in R/data.R\n\\name{data}\n\\alias{data}\n\\alias{moc"
},
{
"path": "man/dot-filter.Rd",
"chars": 2314,
"preview": "% Generated by roxygen2: do not edit by hand\n% Please edit documentation in R/utils.R\n\\name{.filter}\n\\alias{.filter}\n\\ti"
},
{
"path": "man/plotCorrelationMatrix.Rd",
"chars": 2072,
"preview": "% Generated by roxygen2: do not edit by hand\n% Please edit documentation in R/plotCorrelationMatrix.R\n\\name{plotCorrelat"
},
{
"path": "man/plotImage.Rd",
"chars": 1074,
"preview": "% Generated by roxygen2: do not edit by hand\n% Please edit documentation in R/plotImage.R\n\\name{plotImage}\n\\alias{plotIm"
},
{
"path": "man/plotInteractions.Rd",
"chars": 2183,
"preview": "% Generated by roxygen2: do not edit by hand\n% Please edit documentation in R/plotInteractions.R\n\\name{plotInteractions}"
},
{
"path": "man/plotSpatialScatterpie.Rd",
"chars": 2480,
"preview": "% Generated by roxygen2: do not edit by hand\n% Please edit documentation in R/plotSpatialScatterpie.R\n\\name{plotSpatialS"
},
{
"path": "man/plotTopicProfiles.Rd",
"chars": 1522,
"preview": "% Generated by roxygen2: do not edit by hand\n% Please edit documentation in R/plotTopicProfiles.R\n\\name{plotTopicProfile"
},
{
"path": "man/runDeconvolution.Rd",
"chars": 2630,
"preview": "% Generated by roxygen2: do not edit by hand\n% Please edit documentation in R/runDeconvolution.R\n\\name{runDeconvolution}"
},
{
"path": "man/trainNMF.Rd",
"chars": 3368,
"preview": "% Generated by roxygen2: do not edit by hand\n% Please edit documentation in R/trainNMF.R\n\\name{trainNMF}\n\\alias{trainNMF"
},
{
"path": "src/Makevars",
"chars": 308,
"preview": "# Standard portable configuration - let R handle LAPACK/BLAS\nPKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) $(LAPACK_LIBS) $(BLAS_L"
},
{
"path": "src/Makevars.win",
"chars": 183,
"preview": "PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)\r\nPKG_CXXFLAGS = $(SHLIB_OPENMP_CXXFLAGS) -DEIGE"
},
{
"path": "src/RcppExports.cpp",
"chars": 2655,
"preview": "// Generated by using Rcpp::compileAttributes() -> do not edit by hand\n// Generator token: 10BE3573-1514-4C36-9D1C-5A225"
},
{
"path": "src/nmf.cpp",
"chars": 7583,
"preview": "// This C++ file contains a very fast NMF and NNLS implementation\n//\n// Author: Zach DeBruine (zacharydebruine@gmail.co"
},
{
"path": "tests/testthat/test-SPOTlight-steps.R",
"chars": 1828,
"preview": "library(SPOTlight)\nlibrary(SingleCellExperiment)\n# library(RcppML)\nset.seed(321)\n# mock up some single-cell, mixture & m"
},
{
"path": "tests/testthat/test-SPOTlight.R",
"chars": 9321,
"preview": "set.seed(321)\n# mock up some single-cell, mixture & marker data\nsce <- mockSC(ng = 200, nc = 10, nt = 3)\nspe <- mockSP(s"
},
{
"path": "tests/testthat/test-plotCorrelationMatrix.R",
"chars": 2642,
"preview": "set.seed(321)\nx <- replicate(m <- 25, runif(10, 0, 1))\n# Add an anticorrelated column\nx[, 24] <- seq(0, 1, length.out = "
},
{
"path": "tests/testthat/test-plotImage.R",
"chars": 1453,
"preview": "set.seed(321)\n# x_path <- paste0(system.file(package = \"SPOTlight\"), \"/extdata/image.png\")\n# x_path <- \"../../inst/extda"
},
{
"path": "tests/testthat/test-plotInteractions.R",
"chars": 2850,
"preview": "# helper to record base R plot\n# plotNetwork() ----\n# record base R plot\n. <- \\(.) {\n pdf(NULL)\n dev.control(displ"
},
{
"path": "tests/testthat/test-plotSpatialScatterpie.R",
"chars": 3034,
"preview": "set.seed(321)\n# Coordinates\nx <- matrix(nrow = 10, data = c(seq_len(10), 10:1))\nrownames(x) <- paste0(\"spot\", seq_len(nr"
},
{
"path": "tests/testthat/test-plotTopicProfiles.R",
"chars": 1043,
"preview": "set.seed(123)\nx <- mockSC(nc = 50, nt = 3)\ny <- mockSP(x)\nz <- getMGS(x)\nres <- SPOTlight(\n x,\n y,\n groups = x$"
},
{
"path": "tests/testthat/test-runDeconvolution.R",
"chars": 2761,
"preview": "set.seed(321)\n# mock up some single-cell, mixture & marker data\nsce <- mockSC(ng = 200, nc = 10, nt = 3)\nspe <- mockSP(s"
},
{
"path": "tests/testthat/test-trainNMF.R",
"chars": 5981,
"preview": "set.seed(321)\n# mock up some single-cell, mixture & marker data\nsce <- mockSC(ng = 200, nc = 10, nt = 3)\nspe <- mockSP(s"
},
{
"path": "tests/testthat/test-utils.R",
"chars": 4367,
"preview": "set.seed(321)\n# mock up some single-cell, mixture & marker data\nsce <- mockSC()\nspe <- mockSP(sce)\nmgs <- getMGS(sce)\n\n#"
},
{
"path": "tests/testthat.R",
"chars": 94,
"preview": "set.seed(321)\nlibrary(testthat)\nlibrary(SPOTlight)\n# plotImage() ----\ntest_check(\"SPOTlight\")\n"
},
{
"path": "vignettes/SPOTlight_kidney.Rmd",
"chars": 16354,
"preview": "---\ntitle: \"Spatial Transcriptomics Deconvolution with `SPOTlight`\"\ndate: \"`r BiocStyle::doc_date()`\"\nauthor:\n- name: Ma"
}
]
About this extraction
This page contains the full source code of the MarcElosua/SPOTlight GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 45 files (191.4 KB), approximately 53.5k tokens, and a symbol index with 23 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.