Warning

This documentation describes a project under active development; recent changes may not yet be reflected and mistakes can occur.

NixOSCompose generates and deploys reproducible distributed environments, with a focus on the software stack.

Introduction

Presentation

NixOSCompose is a tool designed for experiments in distributed systems. It generates reproducible distributed environments, which can then be deployed on either virtualized or physical platforms, respectively for local and distributed deployments. It inserts itself in the development cycle of the environments and uses the notion of transposition to make that cycle faster and easier. The intended workflow is to iterate quickly on a local virtualized distributed system and, once the configuration is complete, to use it for physical deployment.

Transposition

Transposition enables users to have a single definition of their environment and to deploy it to different platforms. Instead of maintaining one configuration per targeted platform, a single declarative description (called a composition) is used.

(figure: transposition)

The command line tool provides a similar interaction across the different targeted platforms, which we call flavours.

NixOS

As seen in the name of the project, NixOS plays an important role here. We exploit the declarative approach for system configuration provided by the NixOS Linux distribution. We also rely on the reproducibility provided by the Nix package manager which helps in the sharing and rerun of experiments.

Current support

NixOSCompose is still at an early stage of development; it currently supports the following flavours.

  • local flavours
    • nixos-test
    • docker-compose
    • vm-ramdisk (qemu VMs)
  • distributed flavours
    • g5k-image
    • g5k-ramdisk

In this part we will see how to install NixOSCompose locally and on Grid5000.

The installation and usage differ a bit on Grid5000 due to usage policy restrictions and the lack of first-class Nix support.

The nxc command line tool can be used in several ways, depending on where you are in the experiment process. At the initialization of a project you need the Nix package manager and the NixOSCompose project. Once an experiment is bundled (after nxc init), it comes with a way to access the nxc command through a shell provided by Nix. It is also possible to use a reduced version of nxc when a Nix shell becomes a constraint for accessing testbed platform tools; this is the case on Grid5000, where oar commands are not available in a Nix shell.

Nix-dependent commands: [ init build ]

Nix-independent commands: [ start connect stop ]
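To make the split concrete, a typical Grid5000 session (an illustrative sketch; the node name foo and the machine file are placeholders taken from later sections) interleaves the two groups like this:

```shell
# Nix-dependent commands: run inside a Nix-provided shell
nxc init -t basic         # needs Nix + NixOSCompose
nxc build -f g5k-ramdisk  # needs Nix to build the environment

# Nix-independent commands: plain Python, run where platform
# tools (e.g. oar) are available, outside the Nix shell
nxc start -f g5k-ramdisk -m OAR.$OAR_JOB_ID.stdout
nxc connect foo
nxc stop
```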

The installation and usage of NixOSCompose differ depending on the state of the project you are working on. For a new project, you will want to install the nxc command line tool as described in Local Installation. If the project already uses NixOSCompose, because you are developing it or re-running an experiment conducted in the past, you will prefer the version of nxc linked to the project. Invoking nxc in an embedded way is described in Linked/project embedded nxc.

Requirements

Quick note from the NixOS wiki on activating the flakes feature:

  • Non-NixOS

    Edit ~/.config/nix/nix.conf to add this line :

    experimental-features = nix-command flakes
    
  • NixOS

    Edit your configuration by adding the following options

    { pkgs, ... }: {
        nix = {
            package = pkgs.nixFlakes; # or versioned attributes like nixVersions.nix_2_8
            extraOptions = ''
                experimental-features = nix-command flakes
            '';
        };
    }
    

Configuration requirements

On NixOS you need to enable Docker. To avoid compatibility issues with cgroup v2, you also need to force cgroup v1 by setting the option systemd.enableUnifiedCgroupHierarchy to false.

# Docker enabled + cgroup v1
virtualisation.docker.enable = true;
systemd.enableUnifiedCgroupHierarchy = false;

Local installation

The following commands will drop you in a shell where the nxc command is available along with all required runtime dependencies (docker-compose, vde2, tmux, qemu_kvm).

git clone https://gitlab.inria.fr/nixos-compose/nixos-compose.git
cd nixos-compose
nix develop .#nxcShell

Alternative

You can take advantage of the full potential of Nix's flakes. The following command will drop you in the same shell without having to clone the repository.

nix develop https://gitlab.inria.fr/nixos-compose/nixos-compose.git#nxcShell

Tip

Writing the full URL is not very practical; an "alias" can be used.

nix registry add nxc git+https://gitlab.inria.fr/nixos-compose/nixos-compose.git

The command becomes:

nix develop nxc#nxcShell

Project embedded nxc

A project that already uses NixOSCompose in its experiment process provides easy access to a shell with the nxc tool and, if needed, its runtime dependencies. This is achieved thanks to Nix and its flakes feature. By default, a project has a line in its flake.nix similar to this:

devShell.${system} = nxc.devShells.${system}.nxcShell;

It exposes the same shell of NixOSCompose as in the previous section, but pinned to a specific revision thanks to the flake.lock file. The shell is accessible with the command nix develop. To explore which shells are available in the project, use nix flake show. Then, to enter the devShell of your choice, use this command:

nix develop .#nxcShell

Info

Two shells are available:

  • nxcShellLite

    • python app nxc
  • nxcShell

    • python app nxc
    • docker-compose
    • vde2
    • tmux
    • qemu_kvm

Grid5000

On the Grid5000 testbed there is (for now) no first-class support for Nix, so some workarounds are needed. The init and build commands require access to the Nix package manager, which means a local installation of Nix followed by a chroot (see details here). All the other commands rely only on Python and on the presence of the Nix store on the frontend. It also means that building your distributed environments requires the reservation of a resource.

Nix installation on frontend

We are going to use the script nix-user-chroot.sh, which provides Nix without root permissions on the frontend. On its first call it installs the Nix package manager and creates all required folders in your home folder (e.g. ~/.nix/store/), then changes the apparent root directory so that the nix command works as expected. All subsequent calls to this script only do the second part, followed by the activation of the nix command.

curl -L -O https://raw.githubusercontent.com/oar-team/nix-user-chroot-companion/master/nix-user-chroot.sh
chmod +x nix-user-chroot.sh

In an interactive session on a node (obtained with oarsub -I):

login@dahu-17:~$ ./nix-user-chroot.sh 
...
Activate Nix
login@dahu-17:~$

Note

The call to nix-user-chroot.sh can be done either on the frontend or on a node. Doing it on the frontend is practical when you want to exploit the Nix store (e.g. when using Nix shells), but we advise avoiding build phases on the frontend (i.e. any command that populates the Nix store).


Installation of NixOSCompose

The three following sections describe alternative approaches to install NixOSCompose.

Local installation with a poetry shell

Install Poetry locally in your home directory:

curl -sSL https://install.python-poetry.org | python3 -

Clone the NixOSCompose project, then install all Python dependencies with Poetry. Finally, activate the virtualenv.

TODO: with the change in the folder hierarchy, the commands below will change (cd nixos-compose/src or similar).

git clone https://gitlab.inria.fr/nixos-compose/nixos-compose.git
cd nixos-compose
poetry install
poetry shell

For further use only poetry shell is needed.

Python virtualenv with poetry then poetry shell

Similar to the method above, but avoids installing Poetry in your home directory and instead does it in a virtualenv.

git clone https://gitlab.inria.fr/nixos-compose/nixos-compose.git
cd nixos-compose
python3 -m venv /path/to/nxcEnv
. /path/to/nxcEnv/bin/activate
pip install poetry
poetry install
deactivate

Reactivating the virtualenv (. /path/to/nxcEnv/bin/activate) then drops you into an environment where NixOSCompose is available.

Nix chroot on the frontend with nix develop

As seen in the local installation, it is preferable to use the same nxc version as the one used in the project. The downside of this method is that you lose access to the platform-specific commands (e.g. the oarsub command is not accessible in a nix-shell), so you need to either do your resource reservations before entering those commands, or use multiple terminals or a terminal multiplexer (for instance tmux).

./nix-user-chroot.sh #activate Nix
nix develop .#nxcShellLite

The intent of this guide is to go through the different commands made available by nxc. We depict the workflow used while setting up your environments or running an experiment. We do not yet go into the details of the content of a composition, nor how to write one.

Todo

meant to be interactive, multiple runs done locally

Local usage

It is convenient to work locally during the development phase or while configuring the software environment, as it allows for fast development cycles. NixOSCompose lets you iterate quickly with Docker containers or VMs, and avoids using testbed platforms during early testing phases.

Later, when our composition is ready, we will deploy it to Grid5000.

Initialization of a project

There are templates (using the template mechanism of Nix flakes) that provide the boilerplate to start a project. You can use them either with a locally available NixOSCompose or with the nix flake command; both copy all the necessary files. These commands work in a new folder or in an existing project folder.

In the following, we use the basic template: a composition that describes an environment composed of one node called foo, which contains nothing.

  • Nix flake's template

To avoid writing the full path to the NixOSCompose flake, we use the Nix registries.

    nix registry add nxc git+https://gitlab.inria.fr/nixos-compose/nixos-compose.git
    

Now, in any Nix flake-related command, writing nxc# is equivalent to git+https://gitlab.inria.fr/nixos-compose/nixos-compose.git#.

    # initialize current folder
    nix flake init -t nxc#basic
    # or initialize projectFolder
    nix flake new projectFolder -t nxc#basic
    
  • local NixOSCompose

    Using your locally installed NixOSCompose the commands are the following.

    cd nixos-compose
    nix develop .#nxcShellLite
    cd path/to/projectFolder
    nxc init -t basic
    

You can then quit the shell provided by the command nix develop with Ctrl-d.

You end up with this:

$ tree projectFolder
projectFolder
├── composition.nix
├── flake.nix
└── nxc.json

0 directories, 3 files

These 3 files are the minimum required by NixOSCompose.

  • flake.nix is the entry point for a Nix project; it defines all the dependencies and exposes different outputs, like devShells or packages.
  • composition.nix describes the configuration of the roles needed by your experiment.
  • nxc.json defines a few default variables for NixOSCompose.

All these files need to be tracked by Git; Nix flakes even require it to work properly.

git init && git add *

If your project is already using git you just need to add those files.

git add flake.nix composition.nix nxc.json

Overview of composition.nix

Nodes are listed under the field nodes and we can see that we have one node called foo.

{ pkgs, ... }: {
  nodes = {
    foo = { pkgs, ... }: {
      # add needed package
      # environment.systemPackages = with pkgs; [ socat ];
    };
  };
  testScript = ''
    foo.succeed("true")
  '';
}
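As a sketch of how a node gets customized, uncommenting the systemPackages line (socat is just an arbitrary example package, as hinted by the commented line) gives:

```nix
{ pkgs, ... }: {
  nodes = {
    foo = { pkgs, ... }: {
      # make the socat package available on the node
      environment.systemPackages = with pkgs; [ socat ];
    };
  };
  testScript = ''
    foo.succeed("true")
  '';
}
```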

Local development and usage

First enter the nxc shell. Here we choose nxcShell because it provides all the dependencies necessary to run the composition locally (e.g. docker-compose).

cd path/to/projectFolder
nix develop .#nxcShell

The first call to nix develop can take quite some time because it fetches and/or builds all the dependencies of the nxc shell. Later calls will be faster because everything is already present in the Nix store (provided the flake's inputs did not change).

Building

The composition can then be built. The command below evaluates the composition file, then builds the artifacts necessary for a local deployment using Docker containers. The files generated at build time for the different flavours are put in a new build folder. As we are building the docker flavour, it hosts the docker-compose file for our composition.

nxc build -f docker

If there is an error in the Nix code, this is when it shows up.

Local deployment

1. Start

The local deployment is done with the command below. The option -f docker says that we explicitly want to deploy the composition with the docker flavour. This option is optional; if it is omitted, the command chooses the most recently built flavour.

nxc start -f docker

You can check that the corresponding container has been launched.

$ docker ps --format '{{.Names}}'
docker_compose_foo_1

The build folder does not need to be tracked by git.

2. Interact

You can connect to the node with

nxc connect foo

which will open a shell in the container foo.

3. Stop

Lastly, the virtualized environment can be stopped with the following command. It stops the containers previously launched.

nxc stop

Edition and test loop

The three steps above, plus the editing of the composition, create a convenient "Edit - Test" loop. It allows you to quickly iterate on the setup of the environments. At some point the configuration converges to something satisfactory, and physical deployment is the next step.
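Using the commands above with the docker flavour, one iteration of the loop looks like this (an illustrative sketch):

```shell
$EDITOR composition.nix  # 0. edit the composition
nxc build -f docker      # 1. rebuild the artifacts
nxc start -f docker      # 2. start the containers
nxc connect foo          # 3. poke around inside the node
nxc stop                 # 4. tear everything down
```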

Physical deployment on Grid5000

First you need to import your project to your home folder on Grid5000 with your preferred method (rsync, git, ...).

Building on Grid5000

This phase requires the reservation of a node in interactive mode with oarsub -I. Once you have access to the node, you need to activate Nix with the script nix-user-chroot.sh (see the Grid5000 installation section).

./nix-user-chroot.sh

Enter the nix-shell that provides the nxc command.

cd path/to/project
nix develop .#nxcShellLite

You can now build the g5k-ramdisk flavour.

nxc build -f g5k-ramdisk

Since the Nix store is shared between the frontend and the build node, all created artifacts will be available from the frontend for the deployment. Everything stays accessible from the build folder.

Once the build is done you can release the Grid5000 resource.

Deployment

The first step is to claim the resources needed by the project, here only 1 node.

Enter the virtual environment that contains nxc:

TODO: add the virtualenv version

cd path/to/nixos-compose
poetry shell

Resource reservation

The reservation is made through the command below; it requests one node for 30 minutes. At the same time it lists the machines in the stdout file associated with the reservation (OAR.<oar_job_id>.stdout) and defines the $OAR_JOB_ID environment variable. This information is needed for the next step.

cd path/to/project
export $(oarsub -l nodes=1,walltime=0:30 "$(nxc helper g5k_script) 30m" | grep OAR_JOB_ID)

Tip

The command above asks OAR for some resources, then executes a script that sends the user's public key to the nodes.

export $(oarsub -l nodes=<NODES>,walltime=<TIME1> "$(nxc helper g5k_script) <TIME2>" | grep OAR_JOB_ID)

  • NODES: number of nodes needed for the experiment.
  • TIME1: duration of the reservation with h:minute:second syntax (see the Grid5000 wiki)
  • TIME2: duration of the sleep command sent to the nodes, usually the same length as the reservation. Syntax available in the coreutils documentation

Once the resources are available, the OAR.$OAR_JOB_ID.stdout file is created.

$ cat OAR.$OAR_JOB_ID.stdout
dahu-7.grenoble.grid5000.fr
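The export $(... | grep OAR_JOB_ID) idiom works because oarsub prints, among other lines, one of the form OAR_JOB_ID=<id>; grep keeps only that line, and export turns it into an environment variable. A minimal self-contained sketch of the same pattern (fake_oarsub is a stand-in for the real oarsub output):

```shell
# stand-in for oarsub: prints several lines, one being OAR_JOB_ID=<id>
fake_oarsub() {
  echo "Generate a job key..."
  echo "OAR_JOB_ID=424242"
}

# keep only the OAR_JOB_ID=... line and turn it into an env variable
export $(fake_oarsub | grep OAR_JOB_ID)
echo "$OAR_JOB_ID"   # prints 424242
```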

Actual deployment

At this step we have resources available and the composition has been built in the desired flavour, g5k-ramdisk. We can now launch the deployment. The command is similar to a local virtualized deployment except for the option -m, which requires a file listing all the remote machines covered by our deployment. If you used the command from the previous step, the list of machines is in the OAR.$OAR_JOB_ID.stdout file.

nxc start -f g5k-ramdisk -m OAR.$OAR_JOB_ID.stdout

Interact

As with Docker run locally, the command below opens a shell in the foo node.

nxc connect foo

Release of ressources

Once you are finished with the nodes you can release the resources.

oardel $OAR_JOB_ID


This completes the quick start guide, which focused on the CLI. We will now go into more detail on how to write the composition file, how to import personal software, and the different files of a NixOSCompose project.

Here we will go through a complete workflow, with a local test deployment using Docker and a Grid5000 deployment of an Nginx server and a test client.

  • initialization
    • review of the files
    • launch/test
  • Edit the composition
    • add a benchmark tool
    • launch/test
    • add a custom script ?

Initialization

First let's create a folder for our project locally; we create it with the template mechanism of Nix flakes. The template that we import is a client/server architecture where the server hosts an Nginx webserver.

nix flake new webserver -t nxc#webserver

Info

To avoid writing the full path to the NixOSCompose flake we are using the Nix registries.

nix registry add nxc git+https://gitlab.inria.fr/nixos-compose/nixos-compose.git

If we inspect the content of our newly created folder we obtain this.

$ tree webserver
webserver/
├── composition.nix
├── flake.nix
└── nxc.json

0 directories, 3 files

Description of the files

flake.nix

This file is key to the reproducibility of a project. It defines all the dependencies of a project and what it provides as outputs. You can learn more about how a flake file works here.

This file manages the dependencies and the outputs of a project. It has multiple fields :

  • description
  • inputs
  • outputs

The description is a string that describes the flake.

The inputs attribute is a set defining the flake's dependencies (here nixpkgs and NixOSCompose itself), and outputs is a function of those inputs:

{
  description = "nixos-compose - basic webserver setup";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
    nxc.url = "git+https://gitlab.inria.fr/nixos-compose/nixos-compose.git";
  };

  outputs = { self, nixpkgs, nxc }:
  let
    system = "x86_64-linux";
  in {
    packages.${system} = nxc.lib.compose {
      inherit nixpkgs system;
      composition = ./composition.nix;
    };

    defaultPackage.${system} =
      self.packages.${system}."composition::vm";

    devShell.${system} = nxc.devShells.${system}.nxcShellLite;
  };
}
  • composition.nix
    { pkgs, ... }: {
      nodes = {
        server = { pkgs, ... }: {
          services.nginx = {
            enable = true;
            # a minimal site with one page
            virtualHosts.default = {
              root = pkgs.runCommand "testdir" { } ''
                mkdir "$out"
                echo hello world > "$out/index.html"
              '';
            };
          };
          networking.firewall.enable = false;
        };
        client = { ... }: { };
      };
      testScript = ''
        server.wait_for_unit("nginx.service")
        client.wait_for_unit("network.target")
        assert "hello world" in client.succeed("curl -sSf http://server/")
      '';
    }
    
  • nxc.json
    {"composition": "composition.nix", "default_flavour": "vm"}
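For instance, to make the docker flavour the default one (assuming it has been built), nxc.json could be changed to:

```json
{"composition": "composition.nix", "default_flavour": "docker"}
```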
    

Multi-composition

Sometimes you have two or more compositions that have a lot in common. For example, you may want to test a tool with different integrations, run performance tests on multiple similar tools, etc.

To do so, NixOSCompose provides a simple mechanism that allows you to create a multi-composition.

Here is a simple example of a composition.nix file:

{
  oar = import ./oar.nix;
  slurm = import ./slurm.nix;
}

Each *.nix file is a composition file itself. For example, oar.nix might look like:

{ pkgs, ... }: {
  roles =
    let
      commonConfig = import ./common_config.nix { inherit pkgs; };
    in
    {
      server = { ... }: {
        imports = [ commonConfig oarConfig ];
        services.oar.server.enable = true;
        services.oar.dbserver.enable = true;
      };

      node = { ... }: {
        imports = [ commonConfig oarConfig ];
        services.oar.node.enable = true;
      };
    };

  rolesDistribution = { node = 2; };
}
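For completeness, here is a minimal sketch of what the shared common_config.nix might contain (hypothetical content; the oarConfig value imported above would be defined elsewhere in the project):

```nix
# common_config.nix (hypothetical sketch)
{ pkgs }:
{
  # settings shared by the server and node roles
  environment.systemPackages = with pkgs; [ socat ];
  networking.firewall.enable = false;
}
```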

These compositions can be built and started with the VM flavour using the -C or --composition-flavour option:

nxc build -C oar::vm
nxc start -C oar::vm

Import Flakes

If you want to use packages, modules, or libraries from another Nix flake, you can make them available in your composition using an overlay.

In order to add this overlay, you have to edit the flake.nix file and add your flake as input. For example:

  inputs = {
    # ...
    myFlake.url = "github:myTeam/myFlake";
  };

Here is how you can add an extra package using an overlay or add a NixOS module using the extraConfigurations parameter:

  outputs = { self, nixpkgs, nxc, myFlake }:
    let
      system = "x86_64-linux";
    in
    {
      packages.${system} = nxc.lib.compose {
        inherit nixpkgs system;

        # Use this to make a NixOS module available
        extraConfigurations = [ myFlake.nixosModules.myModule ];

        # Use this to make a Nix package available
        overlays = [
           (self: super: {
             myTool = myFlake.packages.${system}.myTool;
           })
        ];
        setup = ./setup.toml;
        compositions = ./composition.nix;
      };
    };
    # ...

Note

For more details on overlays, check out the Nixpkgs documentation on overlays.

You can now use your package or your module in your composition just like the ones present in nixpkgs, for example:

{ pkgs, ... }:
{
  roles = {
    node1 = {
      services.myService.enable = true;
    };
    node2 = {
      environment.systemPackages = with pkgs; [ myTool ];
    };
  };
}

nxc init

Initialize a new environment.

Usage

nxc init [OPTIONS]

Options

  • --no-symlink Disable symlink creation to nxc.json (need to change directory for next command). Default: False

  • -n, --disable-detection Disable platform detection. Default: False

  • -f, --default-flavour Set the default flavour to build; if not given, nixos-compose tries to find a suitable one

  • --list-flavours-json List descriptions of flavours, in JSON format. Default: False

  • -F, --list-flavours List available flavours. Default: False

  • -t, --template Use a template. Default: basic

  • --use-local-templates Whether or not to use the local templates. Default: False

  • --list-templates-json Display the list of available templates as JSON Default: False

  • --help Show this message and exit. Default: False

nxc build

Builds the composition.

It generates a build folder which stores symlinks to the closures associated with a composition. The file name of each symlink follows the structure [composition-name]::[flavour].
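The [composition-name]::[flavour] naming can be taken apart with plain shell parameter expansion, e.g. for a symlink named oar::g5k-nfs-store:

```shell
name="oar::g5k-nfs-store"
composition="${name%%::*}"  # strip everything from '::' on
flavour="${name##*::}"      # strip everything up to '::'
echo "$composition $flavour"   # prints: oar g5k-nfs-store
```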

Examples

  • nxc build -f vm

    Build the vm flavour of your composition.

  • nxc build -C oar::g5k-nfs-store

    Build the oar composition with the g5k-nfs-store flavour.

Usage

nxc build [OPTIONS] [COMPOSITION_FILE]

Options

  • composition_file

  • --nix-flags Add Nix flags (aka options) to the nix build command, e.g. --nix-flags "--impure"

  • --out-link, -o path of the symlink to the build result

  • -f, --flavour Use particular flavour (name or path)

  • -F, --list-flavours List available flavours. Default: False

  • --show-trace Show Nix trace Default: False

  • --dry-run Show what this command would do without doing it Default: False

  • --dry-build Eval build expression and show store entry without building derivation Default: False

  • -C, --composition-flavour Specify which composition and flavour combination to build when multiple compositions are described at once (see the -L option to list them).

  • -L, --list-compositions-flavours List available combinations of compositions and flavours Default: False

  • -s, --setup Select setup variant

  • -p, --setup-param Override setup parameter

  • -u, --update-flake Update flake.lock equivalent to: nix flake update Default: False

  • --monitor Build with nix-output-monitor Default: False

  • --help Show this message and exit. Default: False

nxc start

Starts a set of nodes using the previous build.

ROLES_DISTRIBUTION_FILE is an optional YAML file describing how many instances of each role are expected.

Examples

  • nxc start

    Start the last built composition.

  • nxc start role-distrib.yaml

    With the file role-distrib.yaml written as this:

    nfsServerNode: 1
    nfsClientNode: 2
    

    Instantiates two nodes with the role nfsClientNode and one only with the role nfsServerNode. Of course, these roles have to be described beforehand in a composition.nix file.

Usage

nxc start [OPTIONS] [ROLES_DISTRIBUTION_FILE]

Options

  • -I, --interactive drop into a python repl with driver functions Default: False

  • -m, --machine-file File that contains the names of the remote machines (duplicates are considered as one).

  • -W, --wait-machine-file Wait for the machine-file creation. Default: False

  • -s, --ssh specify particular ssh command Default: ssh -l root

  • -S, --sudo specify particular sudo command Default: sudo

  • --push-path Remote path where to push the image, kernel, and kexec_script on the machines (used to re-kexec)

  • --reuse Assume a previous successful start (with root access via ssh). Default: False

  • --remote-deployment-info Deployment info is served over HTTP (instead of kernel parameters). Default: False

  • --port Port to use for the HTTP server Default: 0

  • -c, -C, --composition specify composition, can specify flavour e.g. composition::flavour

  • -f, --flavour specify flavour

  • -t, --test-script execute testscript Default: False

  • --file-test-script alternative testscript

  • -w, --sigwait Wait for any signal to exit after a start-only action (not testscript execution or interactive use). Default: False

  • -k, --kernel-params additional kernel parameters, this option is flavour dependent

  • -r, --role-distribution specify the number of nodes or nodes' name for a role (e.g. compute=2 or server=foo,bar ).

  • roles_distribution_file

  • --compose-info specific compose info file

  • -i, --identity-file path to the ssh public key to use to connect to the deployments

  • -s, --setup Select setup variant

  • -p, --parameter Parameter added to deployment file (for contextualization phase)

  • -P, --parameter-file Json file contains parameters added to deployment file (for contextualization phase)

  • -d, --deployment-file Deployment JSON file used for the deployment (skips generation). Warning: parametrization not supported (up to now)

  • --ip-range IP range (for now only usable with nspawn flavour) Default: ``

  • --help Show this message and exit. Default: False

nxc connect

Opens one or more terminal sessions into the deployed nodes. By default, it will connect to all nodes, but we can specify which ones to connect to.

When connecting to several machines at once, a tmux (terminal multiplexer) session is used. Feel free to refer to the tmux documentation (or its cheatsheet), especially for the shortcuts to navigate between the different tabs.

Examples

  • nxc connect

Open a tmux session with one pane for each node.

  • nxc connect server

Connect to the server node. It runs in the current shell (tmux is not used in this case).

Usage

nxc connect [OPTIONS] [HOST]...

Options

  • -l, --user

    Default: root

  • -g, --geometry Tmux geometry; two splitting indications are supported: + and /. Examples: "1+3+2" (3 adjacent panes horizontally split into 1, 3, and 2 respectively), "2/3" (2 adjacent panes horizontally split into 3)

  • -d, --deployment-file Deployment file, take the latest created in deploy directory by default

  • -f, --flavour flavour, by default it's extracted from deployment file name

  • -i, --identity-file Path to the ssh identity file used to connect to the deployments

  • -pc, --pane-console Add a pane console Default: False

  • host

  • --help Show this message and exit. Default: False

nxc driver

Run the driver to execute the given script, which interacts with the deployed environment. The script is a Python script similar to a NixOS test script. See the NixOS manual on nixos-tests for more details.

Warning

Be aware that unlike NixOS tests, which only support virtual machines, nxc supports many flavours, and VM-specific features are not supported.

Examples

  • nxc driver -t

    Run the script defined in the composition

Usage

nxc driver [OPTIONS] [TEST_SCRIPT_FILE]

Options

  • -l, --user

    Default: root

  • -d, --deployment-file Deployment file, take the latest created in deploy directory by default

  • -f, --flavour flavour, by default it's extracted from deployment file name

  • -t, --test-script execute the 'embedded' testScript Default: False

  • test-script-file

  • --help Show this message and exit. Default: False

nxc stop

Stop the NixOS composition.

Usage

nxc stop [OPTIONS]

Options

  • -f, --flavour specify flavour

  • -d, --deployment-file specify deployment

  • --help Show this message and exit. Default: False

nxc clean

Clean the nxc folder and the nxc.json file.

Usage

nxc clean [OPTIONS]

Options

  • --help Show this message and exit. Default: False

nxc helper

Specific and contextual helper information (e.g. the g5k_script path for Grid'5000). Warning: experimental command; it may be removed in the future or change without backward compatibility care.

Usage

nxc helper [OPTIONS] [OPTIONS]...

Options

  • -l, --list List of available helpers Default: False

  • options

  • --help Show this message and exit. Default: False

Navigating the code

As we do not yet have complete developer documentation, it is useful to mention where the entry points of the software are. The nxc command invokes nixos_compose/cli.py, which in turn calls one of the files in nixos_compose/commands based on the command verb it was given. Here, a command verb is to be understood as the single word following nxc after a space, as is done nowadays by several command line tools. For instance, the command verb for nxc connect is connect, which is handled in nixos_compose/commands/cmd_connect.py.
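The verb-based dispatch described above can be sketched in plain Python (a purely illustrative sketch of the pattern, not the actual nixos_compose code; the handler names are hypothetical):

```python
# Illustrative sketch: map a command verb to its cmd_<verb> handler,
# the way cli.py routes verbs to files in nixos_compose/commands.
def cmd_connect(args):
    return f"connect to {args[0]}"

def cmd_stop(args):
    return "stopped"

HANDLERS = {"connect": cmd_connect, "stop": cmd_stop}

def dispatch(argv):
    verb, *rest = argv       # first word after `nxc` is the verb
    return HANDLERS[verb](rest)

print(dispatch(["connect", "foo"]))  # → connect to foo
```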

You will also find that a ctx variable is used extensively. It refers to an instance of the Context object defined in nixos_compose/context.py.

Developer Documentation

Generate the CLI reference documentation

To generate the reference documentation directly from the code use this command:

python ./docs/tool/generate_md_doc.py dumps --baseModule=nixos_compose.cli --baseCommand=nxc --docsPath=./docs/src/references/commands