Nextflow is a scientific workflow system predominantly used for bioinformatic data analysis. It establishes standards for programmatically creating a series of dependent computational steps and facilitates their execution on various local and cloud resources.[1][2]
Purpose
Many scientific data analyses require a significant number of sequential processing steps. Custom scripts may suffice when developing new methods or when a particular analysis is run only occasionally, but they scale poorly to complex sequences of tasks or to large numbers of samples.[3][4][5]
Scientific workflow systems like Nextflow allow an analysis to be formalized as a data analysis pipeline. Pipelines, also known as workflows, specify the order and conditions of computing steps. They are executed by special-purpose programs, so-called workflow executors, which ensure predictable and reproducible behavior in various computing environments.[3][6][7][8]
Workflow systems also provide built-in solutions to common challenges of workflow development, such as applying an analysis to multiple samples, validating input and intermediate results, executing steps conditionally, handling errors, and generating reports. Advanced features of workflow systems may also include scheduling capabilities, graphical user interfaces for monitoring workflow executions, and the management of dependencies by containerizing the whole workflow or its components.[9][10]
Typically, scientific workflow systems initially present a steep learning curve, since their features and abstractions must be learned in addition to the actual analysis. However, the standards and abstraction imposed by workflow systems ultimately improve the traceability of analysis steps, which is particularly relevant when collaborating on pipeline development, as is customary in scientific settings.[11]
Characteristics
Specification of workflows
In Nextflow, pipelines are constructed from individual processes that work in parallel to perform computational tasks. Each process is defined with input requirements and output declarations. Instead of running in a fixed sequence, a process starts executing when all its input requirements are fulfilled. By specifying the output of one process as the input of another, a logical and sequential connection between processes is established.[12]
Processes and entire workflows are programmed in a domain-specific language (DSL) provided by Nextflow, which is based on Apache Groovy.[14] While Nextflow's DSL is used to declare the workflow logic, developers can use their scripting language of choice within a process and mix multiple languages in a workflow. It is also possible to port existing scripts and workflows to Nextflow. Supported scripting languages include Bash, csh, ksh, Python, Ruby, and R; any scripting language that uses the standard Unix shebang declaration (e.g. #!/bin/bash) is compatible with Nextflow.
Below is a minimal sketch of a workflow consisting of only one process (the process name, input value, and file name are illustrative):
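#!/usr/bin/env nextflow

// A single process that writes a greeting to a file
process sayHello {
    input:
    val name

    output:
    path 'greeting.txt'

    script:
    """
    echo "Hello, ${name}!" > greeting.txt
    """
}

workflow {
    // A channel holding one value is fed to the process
    Channel.of('world') | sayHello
}

The script block above defaults to Bash; with a shebang line such as #!/usr/bin/env python3, the same block could instead contain a Python script. In a larger pipeline, the output channel of this process would be declared as the input of a downstream process, establishing the dependency between the two.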
Nextflow's DSL allows workflows to be deployed and run across different computing environments without having to modify the pipeline code; the target environment is chosen through configuration, as sketched after the list below. Nextflow comes with specific executors for various platforms, including major cloud providers, and supports the following environments for pipeline execution:[16]
Local: This is the default executor. The pipeline runs on Linux or macOS, on the same computer where it is launched.
HPC workload managers: Nextflow supports workload managers such as Slurm, SGE, LSF, Moab, PBS Pro, PBS/Torque, HTCondor, NQSII, and OAR.
Kubernetes: Nextflow can be used with local or cloud-based Kubernetes implementations (GKE, EKS, or AKS).
Cloud batch services: It is compatible with AWS Batch[17] and Azure Batch.[18]
Other environments: Nextflow can also be used with Apache Ignite, Google Life Sciences, and various container frameworks for portability.[19]
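The execution environment is selected through Nextflow's configuration rather than in the workflow script. A minimal nextflow.config sketch, assuming a Slurm cluster with an illustrative queue name:

// nextflow.config: choose the executor without touching the pipeline code
process {
    executor = 'slurm'      // alternatives include 'local', 'sge', 'awsbatch', 'k8s'
    queue    = 'standard'   // illustrative queue/partition name
}

With such a configuration, the same pipeline script can be launched unchanged on a laptop, an HPC cluster, or a cloud batch service.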
Containers for portability across computing environments
Nextflow is tightly integrated with software containers. Entire workflows and individual processes can run inside containers across different computing environments, eliminating the need for complex installation and configuration routines.[3][20]
Nextflow supports container frameworks such as Docker, Singularity, Charliecloud, Podman, and Shifter. These containers can be automatically retrieved from external repositories when the pipeline is executed. Additionally, it was revealed at Nextflow Summit 2022 that future versions of Nextflow will support a dedicated container provisioning service for better integration of customized containers into workflows.[21][22]
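As an illustrative sketch (the process name, image, and command are assumptions, not taken from a specific pipeline), a container image is assigned to a process with the container directive; the corresponding engine is then enabled in the configuration, for example with docker.enabled = true or singularity.enabled = true:

// Assigning a container image to a single process; the image is pulled
// automatically from its registry when the pipeline runs
process countLines {
    container 'ubuntu:22.04'   // illustrative image name

    input:
    path infile

    output:
    path 'count.txt'

    script:
    """
    wc -l ${infile} > count.txt
    """
}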
Developmental history
Nextflow was originally developed at the Centre for Genomic Regulation in Spain and released as an open-source project on GitHub in July 2013.[23] In October 2018, the project license for Nextflow was changed from GPLv3 to Apache 2.0.[24]
In July 2018, Seqera Labs was launched as a spin-off from the Centre for Genomic Regulation.[20] The company employs many of Nextflow's core developers and maintainers and provides commercial services and consulting with a focus on Nextflow.[25]
In July 2020, a major extension and revision of Nextflow's domain-specific language was introduced to allow for sub-workflows and additional improvements.[26] In the same year, monthly downloads of Nextflow reached approximately 55,000.[20]
Adoption and reception
The nf-core community
Nextflow has been adopted by several sequencing facilities, including the Centre for Genomic Regulation,[27] the Quantitative Biology Center in Tübingen, the Francis Crick Institute, the A*STAR Genome Institute of Singapore, and the Swedish National Genomics Infrastructure, as their preferred scientific workflow system.[20] These facilities have collaborated to share, harmonize, and curate bioinformatic pipelines,[28][29][30][31] leading to the creation of the nf-core project.[32] Led by Phil Ewels, then at the Swedish National Genomics Infrastructure,[33][34] nf-core focuses on ensuring reproducibility and portability of pipelines across different hardware, operating systems, and software versions. In July 2020, Nextflow and nf-core received a grant from the Chan Zuckerberg Initiative in recognition of their importance as open-source software.[35] As of 2024, the nf-core organization hosts 117 Nextflow pipelines for the biosciences and more than 1382 process modules. With more than 1200 developers and scientists involved, it is the largest collaborative effort and community for developing bioinformatic data analysis pipelines.[36]
By domain and research subject
In terms of domain and research subject, Nextflow is preferentially used for processing sequencing data and conducting genomic analyses. Over the past five years, numerous pipelines have been published for various applications and analyses in the genomics field.
One notable use case is its role in pathogen surveillance during the COVID-19 pandemic.[37] Swift and highly automated processing of raw data, variant analysis, and lineage designation were essential for monitoring the emergence of new virus variants and tracing their global spread. Nextflow-enabled pipelines played a crucial role in this effort.[38][39][40][41][42][43][44]
Nextflow also plays a significant role at the non-profit plasmid repository Addgene, which uses it to confirm the integrity of all deposited plasmids.[45]