Grimage


Overview

The Grimage cluster was originally dedicated to supporting the Grimage VR platform: driving its hardware (cameras, etc.) and processing its data (video captures, etc.).

More recently, 10GE Ethernet cards were added to some nodes for a new project, turning the cluster into a shared, multi-project platform. At least 4 projects currently use the cluster, which calls for resource management and deployment systems suited to an experimental platform, much like Grid'5000.

Grimage nodes have large computer cases (4U) so that they can host various additional hardware.

By design, the hardware configuration of the Grimage nodes is subject to change:
  • new generations of video (GPU) cards may be installed over time
  • 10GE network connections may change
  • ...
Grimage 10GE network

The current 10GE network setup is as follows:
  • One Myricom dual port card is installed on each of grimage-{4,5,7,8}
  • One Intel dual port card is installed on each of grimage-{2,5,6,7}

Connections are point-to-point (NIC to NIC, no switch), as follows:

  • Myricom: grimage-7 <-> grimage-8 <-> grimage-4 <-> grimage-5
  • Intel: grimage-2 <=> grimage-5 and grimage-6 <=> grimage-7 (double links)
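
Since these links are point-to-point, each end must be addressed manually. A minimal sketch, run as root on a deployed system (the interface name eth2 and the 192.168.10.0/24 addresses are assumptions; check "ip link" for the actual 10GE interfaces):

  # On grimage-7: bring up the Myricom port facing grimage-8
  ip link set eth2 up
  ip addr add 192.168.10.7/24 dev eth2
  # On grimage-8: mirror the configuration
  #   ip link set eth2 up
  #   ip addr add 192.168.10.8/24 dev eth2
  # Verify the direct link from grimage-7
  ping -c 3 192.168.10.8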

How to experiment

The default system of the Grimage nodes is designed to operate the Grimage VR room.

If the default system is not sufficient, kadeploy must be used to deploy a system adapted to other needs.
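
A minimal deployment sketch, assuming the usual OAR + kadeploy3 workflow (as on Grid'5000); the environment name "debian-base" is an assumption and must be replaced by an environment actually registered on the platform:

  # Reserve one node in deploy mode, interactively, for 2 hours
  oarsub -t deploy -l /nodes=1,walltime=2:00:00 -I
  # Deploy the environment on the reserved node; -k copies your SSH key
  kadeploy3 -f $OAR_NODEFILE -e debian-base -k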

Privileged commands

Currently, the following commands can be run via sudo in exclusive jobs:

  • sudo /usr/bin/whoami (provided for testing the mechanism; should return "root")
  • sudo /sbin/reboot
  • sudo /usr/bin/schedtool
  • sudo /usr/bin/nvidia-smi
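
For instance, inside an exclusive job (the schedtool arguments and ./my_app are purely illustrative):

  sudo /usr/bin/whoami                          # sanity check, prints "root"
  sudo /usr/bin/schedtool -F -p 50 -e ./my_app  # run my_app under SCHED_FIFO at priority 50
  sudo /usr/bin/nvidia-smi                      # query the state of the GPUs
  sudo /sbin/reboot                             # reboot the node when done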

System changelog

To be completed.
