Majorana has been upgraded

Over the past few days we performed a complete upgrade of our beloved cluster.
This was necessary because our installation was old and suffered from many
errors (mainly the recurrent problems with the filesystem).
Most of the cluster is back online (with the major exception of the c1
nodes and some c3 nodes). Of course, since everything is new, there
might be some problems to iron out in these first days. Let us know if
you run into any.

Here is a list of the updates:

* We switched from 32-bit to 64-bit.
Remember to recompile everything: old (32-bit) executables will not work.
Recompiling everything includes deleting every file produced during
the previous compilation, otherwise you may still pick up some old ones. In
other words, remember to run “make clean” before “make”. In Argos you need
to execute “ clean”

* The Linux OS is now Rocks Cluster 6.0, based on CentOS 6.3, if you
need to mention it in a paper. You may read on the internet that Rocks
6.0 is based on CentOS 6.2, but we upgraded our system further.

* The installed packages have been cleaned up a bit. We took the occasion
not to re-install some packages that we believed were no longer
required: we prefer to keep the system as clean as possible!
If you miss something that you really need, see this page:
Remember that you can see the list of all packages installed (on the
compute nodes) here:

Do not hesitate to ask if you have any further questions.
Expect some (small) issues at the beginning; the contrary would be surprising!
Ah! And we managed to keep all your data. :-)

Have a nice week,
the cluster team.


New switch, c3 nodes and new hi_mem queue

I’m happy to announce that the cluster is up and running. Over these two
days we installed the new switch, which should fix most of the problems
we were encountering in the past. The new switch also allows us to
have all nodes up and running.

New things:

– Most of the new c3 nodes are up and running. This means 256 new
slots for the users. Some of them are currently disabled because of a
small hardware problem. You can explicitly submit to the c3
nodes using “#$ -l opteron6128”.
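A minimal submission script targeting the c3 nodes might look like the sketch below (the script layout is a plain Grid Engine job script; the program name `my_simulation` and the other `#$` directives are placeholders you should adapt — only the `-l opteron6128` flag comes from the announcement above):

```shell
#!/bin/bash
# Sketch of a job script requesting the c3 (Opteron 6128) nodes.
#$ -S /bin/bash          # run the job under bash
#$ -cwd                  # start in the submission directory
#$ -l opteron6128        # request a slot on the c3 nodes

./my_simulation          # hypothetical executable
```

You would then submit it as usual, e.g. with “qsub myjob.sh”.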

– Since we have more nodes, we have introduced a new type of queue, the
high-memory queue. The hi_mem queue runs on the c0 nodes and
allows you to run programs that use more than 450 MB of memory. The
memory limit for this queue is 960 MB (soft) / 980 MB (hard). There are 32
slots available for the hi_mem queue. You can submit jobs to this
queue using “#$ -l hi_mem” in your scripts. The short and long queues
on the c0s are now disabled. Note that if you don’t specify any queue
(that is, you don’t put any -l in your scripts) the jobs will run in
the first available slot.
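Putting that together, a hi_mem submission script could look like this sketch (again, `my_big_memory_job` and the extra `#$` directives are placeholder assumptions; only `-l hi_mem` is taken from the announcement):

```shell
#!/bin/bash
# Sketch of a job script for the hi_mem queue on the c0 nodes.
# Keep the program's resident memory below the 960 MB soft limit
# (the 980 MB hard limit will kill the job outright).
#$ -S /bin/bash
#$ -cwd
#$ -l hi_mem             # request a slot in the high-memory queue

./my_big_memory_job      # hypothetical executable
```

Jobs without any -l request, by contrast, are simply placed in the first available slot, as noted above.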
