Tuesday, September 06, 2011

Installing and Deploying a Cluster Publisher

As part of the battle to replace our LCG-CEs with CreamCEs, I realised that the reason one of our new CreamCEs was not getting many jobs was that it was not publishing a cluster/subcluster into the BDII (despite having a /var/lib/bdii/gip/static-file-Cluster.ldif file), and so I guess it wasn't matching any resources.

Since I eventually wanted to move to a standalone Cluster Publisher anyway, I thought it would be easiest to push ahead and install that rather than install one on the CreamCE and remove it later.

So, with a shiny new VM and certificate in hand, I plunged onwards.

The first step was to define the cluster variables in site-info.def (or, in this case, in a node-specific file):

cat /opt/glite/yaim/etc/nodes/heplnv146.pp.rl.ac.uk
CE_HOST_heplnx206_pp_rl_ac_uk_CE_TYPE=cream
CE_HOST_heplnx206_pp_rl_ac_uk_CE_InfoJobManager=pbs
CE_HOST_heplnx206_pp_rl_ac_uk_QUEUES="grid"
CE_HOST_heplnx207_pp_rl_ac_uk_CE_TYPE=cream
CE_HOST_heplnx207_pp_rl_ac_uk_CE_InfoJobManager=pbs
CE_HOST_heplnx207_pp_rl_ac_uk_QUEUES="grid"
CLUSTER_HOST=heplnv146.pp.rl.ac.uk
CLUSTERS=GRID
CLUSTER_GRID_CLUSTER_UniqueID=grid.pp.rl.ac.uk
CLUSTER_GRID_CLUSTER_Name=grid.pp.rl.ac.uk
CLUSTER_GRID_SITE_UniqueID=UKI-SOUTHGRID-RALPP
CLUSTER_GRID_CE_HOSTS="heplnx206.pp.rl.ac.uk heplnx207.pp.rl.ac.uk"
CLUSTER_GRID_SUBCLUSTERS="GRID"
SUBCLUSTER_GRID_SUBCLUSTER_UniqueID=grid.pp.rl.ac.uk
SUBCLUSTER_GRID_HOST_ApplicationSoftwareRunTimeEnvironment="
LCG-2
LCG-2_1_0
LCG-2_1_1
LCG-2_2_0
LCG-2_3_0
LCG-2_3_1
LCG-2_4_0
LCG-2_5_0
LCG-2_6_0
LCG-2_7_0
GLITE-3_0_0
RALPP
SOUTHHGRID
GRIDPP
R-GMA
"
SUBCLUSTER_GRID_HOST_ArchitectureSMPSize=4
SUBCLUSTER_GRID_HOST_ArchitecturePlatformType=x86_64
SUBCLUSTER_GRID_HOST_BenchmarkSF00=0
SUBCLUSTER_GRID_HOST_BenchmarkSI00=2390
SUBCLUSTER_GRID_HOST_MainMemoryRAMSize=2000
SUBCLUSTER_GRID_HOST_MainMemoryVirtualSize=2000
SUBCLUSTER_GRID_HOST_NetworkAdapterInboundIP=FALSE
SUBCLUSTER_GRID_HOST_NetworkAdapterOutboundIP=TRUE
SUBCLUSTER_GRID_HOST_OperatingSystemName=ScientificSL
SUBCLUSTER_GRID_HOST_OperatingSystemRelease=5.4
SUBCLUSTER_GRID_HOST_OperatingSystemVersion=Boron
SUBCLUSTER_GRID_HOST_ProcessorClockSpeed=2300
SUBCLUSTER_GRID_HOST_ProcessorModel=Xeon
SUBCLUSTER_GRID_HOST_ProcessorOtherDescription='Cores=3.7656,Benchmark=9.56-HEP-SPEC06'
SUBCLUSTER_GRID_HOST_ProcessorVendor=Intel
SUBCLUSTER_GRID_SUBCLUSTER_Name=grid.pp.rl.ac.uk
SUBCLUSTER_GRID_SUBCLUSTER_PhysicalCPUs=546
SUBCLUSTER_GRID_SUBCLUSTER_LogicalCPUs=2056
SUBCLUSTER_GRID_SUBCLUSTER_WNTmpDir=/scratch

Then it was a simple case of installing the rpms and running YAIM:

yum install emi-cluster
/opt/glite/yaim/bin/yaim -c -s /opt/glite/yaim/etc/site-info.def -n glite-CLUSTER

At that point we seemed to have a working system: the BDII was running and queryable, I could connect to the gridftp server, and it had set up the experiment and cluster tag directories in /opt/edg/var/info/ and /opt/glite/var/info/.
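For anyone checking their own setup, a query along these lines against the new node's resource BDII should return the cluster and subcluster objects (port 2170 and the base DN are the standard resource BDII defaults, so treat this as a sketch):

ldapsearch -x -h heplnv146.pp.rl.ac.uk -p 2170 -b mds-vo-name=resource,o=grid '(objectClass=GlueCluster)'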

Fine; the next step was to rsync the contents of those directories over from the torque server that currently exports them to the CEs - well, actually to /export/gridtags and /export/glitetags, and symlink the previous locations to those. cfengine had already set the node up as an NFS server for me, so exporting the new areas and updating the CEs to mount them from there was a matter of moments.
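For anyone doing the same, the shuffle was roughly along these lines ("torque-server" is a placeholder for the real torque host, and the exact rsync flags are just an illustration):

rsync -a torque-server:/opt/edg/var/info/ /export/gridtags/
rsync -a torque-server:/opt/glite/var/info/ /export/glitetags/
mv /opt/edg/var/info /opt/edg/var/info.orig && ln -s /export/gridtags /opt/edg/var/info
mv /opt/glite/var/info /opt/glite/var/info.orig && ln -s /export/glitetags /opt/glite/var/info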

A quick check of the resource BDII looked fine, so it was a simple matter to add the new source into the site BDII and tweak the static-file-CE.ldif file on the CreamCE to assign it to the new cluster.
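For reference, those two tweaks look roughly like this. On the site BDII the standard YAIM variables pick up the new source ("CLUSTER" as a region name is just my label here):

BDII_REGIONS="CE SE ... CLUSTER"
BDII_CLUSTER_URL="ldap://heplnv146.pp.rl.ac.uk:2170/mds-vo-name=resource,o=grid"

And in static-file-CE.ldif on the CreamCE the CE entries point at the cluster via its UniqueID:

GlueForeignKey: GlueClusterUniqueID=grid.pp.rl.ac.uk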

One thing remained: when testing the gridftp server with uberftp* I'd noticed that I was not mapped to my usual pool account - not surprising, as I had not mounted the site gridmapdir, so it was using its local one. However, reasoning that the gridftp server was the same rpm as the one on the CreamCE, which uses Argus for authentication and mapping, I had a poke around on the CreamCE and in YAIM, then tried installing the argus-gsi-pep-callout rpm and copying over /etc/grid-security/gsi-authz.conf and /etc/grid-security/gsi-pep-callout.conf from the CreamCE.
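In other words, roughly the following (heplnx206 being one of the CreamCEs above; the restart line assumes the usual EMI gridftp service name):

yum install argus-gsi-pep-callout
scp heplnx206.pp.rl.ac.uk:/etc/grid-security/gsi-authz.conf /etc/grid-security/
scp heplnx206.pp.rl.ac.uk:/etc/grid-security/gsi-pep-callout.conf /etc/grid-security/
service globus-gridftp-server restart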

Another quick test with uberftp and yes, I am mapped to my normal pool account, so it appears I have a Cluster Publisher with Argus integration working. That means the only things at the site not using Argus are the gLite CreamCE, which will soon be replaced by another EMI one, and dCache, which will get banning from Argus when I update to the next Golden Release.

*uberftp heplnv146.pp.rl.ac.uk "ls /etc"

Friday, July 22, 2011

EMI CREAM

We have installed the EMI CreamCE at Oxford. It was quite straightforward and apparently everything was set up properly by YAIM; the main differences are that EMI CREAM uses the normal /etc/ and /usr/ directories instead of /opt/glite, and that it uses just one repository for all packages, with no more separate TORQUE_* repositories.
Jobs were running perfectly and all test jobs completed successfully, but the CE was only getting lhcbpilot jobs, and after looking more closely it turned out to be the classic "GlueCEStateWaitingJobs: 444444" problem.
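The bogus numbers are easy to spot with a direct query of the CE's resource BDII, something like the following (standard port and base DN assumed):

ldapsearch -x -h t2ce02.physics.ox.ac.uk -p 2170 -b mds-vo-name=resource,o=grid '(objectClass=GlueCE)' GlueCEStateWaitingJobs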

Drilling down through many layers of wrappers, it comes down to this issue:
/sbin/runuser -s /bin/sh ldap -c "diagnose -g --host=t2ce02.physics.ox.ac.uk"
ERROR: 'diagnose' failed
ERROR: user 'ldap' is not authorized to execute command 'diagnose'

I think this is a less well documented part of the EMI CreamCE: in gLite, the slapd and bdii-update processes were run by edguser, but with EMI they are run by the ldap user.
So I edited the maui.cfg file to add it as an admin:
ADMIN3 edginfo rgma edguser ldap
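Maui only reads maui.cfg at startup, so remember to restart it for the change to take effect:

service maui restart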

That solved the problem; it arose because I was using our site-wide maui.cfg file instead of the default created by YAIM. Just a heads-up if you are planning to install the EMI CreamCE.

Friday, March 11, 2011

SAM to MyEGEE to finally MyEGI

I have updated gridppnagios to the latest release of the WLCG Nagios. It is a major release in the sense that it stops configuring MyEGEE as the portal and replaces it with MyEGI. MyEGEE will still be there until I drop the myegee DB from the gridppnagios machine, but don't trust it anymore. I got two complaints about MyEGEE within a few hours of updating, so I can say that people are looking at it.
The other main change is that the Nagios Configuration Generator (NCG) now uses the Aggregated Topology Provider (ATP) instead of SAMDB to configure Nagios. ATP is part of the ROC/NGI Nagios package; it aggregates information from GOCDB, the top-level BDII, VO feeds and so on, and is a single authoritative source of topology information. However, it is the central ATP (http://grid-monitoring.cern.ch/atp) that is used by all ROCs/NGIs for topology configuration, for the sake of uniformity and probably reliability. The old SAM infrastructure can now retire in peace.
So, MyEGI. It is a kind of all-in-one portal (https://gridppnagios.physics.ox.ac.uk/myegi): it has a Gridmap, metric status, history and so on. Aesthetically MyEGEE was better, but MyEGI has more functionality, and if you are still not convinced then check the comparison of SAM, MyEGEE and MyEGI here (https://tomtools.cern.ch/confluence/display/SAM/MyEGI+vs+MyEGEE+vs+SAM+Portal).
MyEGI has very good search options and also an advanced filter, so you can refine your search and bookmark the URL for, say, the status of your site.
I have just discovered two bugs, the irritating one being that the history bar shows a date one day ahead: if you want to see the status for 11 March, check 12 March! Bugs have been opened and hopefully they will be fixed soon:
https://tomtools.cern.ch/jira/browse/SAM-1325
https://tomtools.cern.ch/jira/browse/SAM-1326

Monday, February 28, 2011

Going through the Argus Valley

Being an early adopter site for Argus, Oxford got one of the first multi-user pilot jobs (MUPJ) from ATLAS using glexec through Argus, and it failed, although we had been passing the ops glexec tests for a long time.
Our understanding of Argus was that it must have a policy which authorizes pilots to switch to a normal user, so I had a policy like this to authorize pilots for glexec:

resource "http://authz-interop.org/xacml/resource/resource-type/wn" {
obligation "http://glite.org/xacml/obligation/local-environment-map" {
}

action "http://glite.org/xacml/action/execute" {
rule permit { pfqan="/ops/Role=pilot" }
rule permit { pfqan="/atlas/Role=pilot" }
rule permit { pfqan="/cms/Role=pilot" }
}
}


After discussion with the Argus experts on the mailing list, it turned out that when the pilot framework asks glexec to switch from the pilot user to the effective user, the LCMAPS PEP plugin sends the proxy of the effective user to the Argus server for authorization and mapping. So Argus must also have a policy which authorizes the effective user. I changed the policy to look like this:

rule permit {pfqan = "/atlas/Role=pilot" }
rule permit {pfqan = "/atlas/Role=lcgadmin" }
rule permit {pfqan = "/atlas/Role=production" }
rule permit {pfqan = "/atlas/" }

That solved the problem. Doesn't it look like every ATLAS user is now allowed to switch identity through glexec? As far as Argus is concerned, yes. But the glexec configuration is defined on the WN, and only groups whitelisted in /opt/glite/etc/glexec.conf are allowed to use glexec; any other user trying it will be shot down at the WN itself. By default only pilot users are whitelisted on the WN.
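For illustration, the whitelist lives in the [glexec] section of glexec.conf and looks something like this (the account stems here are made-up examples; a leading dot matches pool accounts with that prefix):

[glexec]
user_white_list = .atlpilot, .opspilot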
So, in a nutshell, the policies in Argus should resemble those of the CE.
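For completeness, the policies themselves are managed with the pap-admin CLI; from memory, listing the current set and loading a policy file go something like this (the file name is just an example):

pap-admin list-policies
pap-admin add-policies-from-file atlas-pilot-policy.txt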

Friday, January 07, 2011

glite-APEL Node

On Thursday 9th December we brought the new glite-APEL box on line.

The VM, hosted by t2delltest, had already been installed and Kashif had installed the cert.

We ran APEL on all the CEs and t2torque02, and then one last time on t2mon02.

Then we reconfigured t2ce02 to point at the new APEL box and ran APEL on it, and saw new records created on the box (after sorting some permissions issues: you need to rerun YAIM with each CE, and t2torque02, set in the site-info.def file, as each run does the magic to allow that node to write to the DB, and FQDNs should be used).
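For the record, the rerun in question is just the usual YAIM invocation for the APEL node type, along the lines of (path as an illustration):

/opt/glite/yaim/bin/yaim -c -s /opt/glite/yaim/etc/site-info.def -n glite-APEL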
We then changed the reference from t2mon02 to t2apel01 in the site-info.def file on pplxconfig and it propagated round to the other nodes.
The first run that night failed due to a Java out-of-memory error, so I tweaked the config file /opt/glite/etc/glite-apel-publisher/publisher-config-yaim.xml, reducing the relevant limit from the original 300000 to 150000.

All APEL log files on all CEs, t2torque02 and t2apel01 now appear to be good.
Cristina can see records appearing at RAL.

The old MySQL database from t2mon02 has been backed up in /data/sysadmin (pplxfs2).