Add another compute node
For the moment we have a single compute node. In this section, we will add another one and run IOR on several nodes.
Add another role in the composition
We will rename the role node into node1 and create a new role node2 with the exact same configuration:
roles = {
  node1 = { pkgs, ... }:
    {
      # Add the needed packages
      environment.systemPackages = with pkgs; [ openmpi ior glusterfs ];
      # Disable the firewall
      networking.firewall.enable = false;
      # Mount the PFS
      fileSystems."/data" = {
        device = "server:/gv0";
        fsType = "glusterfs";
      };
    };
  node2 = { pkgs, ... }:
    {
      # Add the needed packages
      environment.systemPackages = with pkgs; [ openmpi ior glusterfs ];
      # Disable the firewall
      networking.firewall.enable = false;
      # Mount the PFS
      fileSystems."/data" = {
        device = "server:/gv0";
        fsType = "glusterfs";
      };
    };
  server = { pkgs, ... }:
    {
      # ... unchanged server configuration
    };
}
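Since node1 and node2 are identical, the duplication can optionally be factored out with a let binding. A minimal sketch (the nodeConfig name is ours, not part of the tutorial):

roles =
  let
    # Shared configuration for both compute nodes (hypothetical helper name)
    nodeConfig = { pkgs, ... }:
      {
        environment.systemPackages = with pkgs; [ openmpi ior glusterfs ];
        networking.firewall.enable = false;
        fileSystems."/data" = {
          device = "server:/gv0";
          fsType = "glusterfs";
        };
      };
  in
  {
    node1 = nodeConfig;
    node2 = nodeConfig;
    server = { pkgs, ... }:
      {
        # ... unchanged server configuration
      };
  };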
Building
nxc build -f g5k-nfs-store
Deploying
Reserving the resources
export $(oarsub --project lab-2025-compas-nxc -l nodes=3,walltime=1:0:0 "$(nxc helper g5k_script) 1h" | grep OAR_JOB_ID)
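Before deploying, you can check that the job is running with the standard OAR status command (not specific to this tutorial):

oarstat -j $OAR_JOB_ID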
Deploying
nxc start -m OAR.$OAR_JOB_ID.stdout -W
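The -m flag points nxc start at the machine file written by OAR; the OAR.$OAR_JOB_ID.stdout file should contain the list of reserved machines and can be inspected with:

cat OAR.$OAR_JOB_ID.stdout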
Connect to the nodes
nxc connect
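This opens a session on the deployed machines. If your nxc version supports it, you can also connect to a single machine by passing its role name, e.g.:

nxc connect node1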
Remount the volume from the nodes (run this command once, from any of the nodes):
cat /etc/hosts | grep node | cut -f2 -d" " | xargs -t -I{} systemctl --host root@{} restart data.mount
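To check that the remount succeeded, you can query the mount unit and the filesystem from any node (node1 taken as an example):

systemctl --host root@node1 status data.mount
df -h /data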
After building and starting the environment, we now have 3 nodes: node1, node2 and server.
We can now run IOR with MPI from the nodes, writing to the PFS (/data).
All the deployed machines already know each other (you can look at /etc/hosts to verify). So we will create the MPI hostfile myhosts:
cd /data
printf "node1 slots=8\nnode2 slots=8\n" > myhosts
The /data/myhosts file should look like:
node1 slots=8
node2 slots=8
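Instead of writing the hostfile by hand, you can also generate it from /etc/hosts, reusing the same pattern as the remount command above (a sketch; adjust slots=8 to the number of cores per node):

grep node /etc/hosts | cut -f2 -d" " | sed 's/$/ slots=8/' > /data/myhosts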
Now, from any node (node1 or node2), we can start the benchmark (without the high-performance network of Grid'5000) with:
cd /data
mpirun --mca pml ^ucx --mca mtl ^psm2,ofi --mca btl ^ofi,openib --allow-run-as-root --hostfile myhosts -np 16 ior
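In the --mca options, the ^ prefix excludes the listed components, so Open MPI falls back to plain TCP instead of the high-performance interconnect. By default ior runs a small write/read test; it can be tuned with standard IOR flags, for example (values here are arbitrary: -t is the transfer size, -b the block size per task, -s the number of segments):

mpirun --mca pml ^ucx --mca mtl ^psm2,ofi --mca btl ^ofi,openib --allow-run-as-root --hostfile myhosts -np 16 ior -t 1m -b 16m -s 4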
Release the nodes
oardel $OAR_JOB_ID