Condor daemons
condor_master
Responsible for keeping the rest of the Condor daemons running in a pool; the master spawns the other daemons.
Runs on every machine in your Condor pool.
condor_startd
Any machine that wants to execute jobs needs to have this daemon running. It advertises a machine ClassAd.
Responsible for enforcing the policy under which jobs will be started, suspended, resumed, vacated, or killed.
condor_starter
Spawned by the condor_startd.
Sets up the execution environment, creates a process to run the user job, and monitors the job while it runs.
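A minimal conceptual sketch of the "create a process and monitor it" step, in C (POSIX fork/exec/wait). This is only an illustration at the system-call level, not Condor's actual condor_starter code, and the environment variable name is made up.

/* Minimal sketch (conceptual only, not Condor's condor_starter code):
 * spawn a process to run a user job and monitor it until it exits. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: set up a (trivial) execution environment, then run the job. */
        setenv("JOB_SCRATCH_DIR", "/tmp", 1);   /* hypothetical variable name */
        execl("/bin/echo", "echo", "user job running", (char *)NULL);
        perror("execl");                        /* only reached on error */
        _exit(127);
    }
    /* Parent: monitor the job until it finishes. */
    int status;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        printf("job exited with status %d\n", WEXITSTATUS(status));
    return 0;
}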
condor_schedd
Any machine that allows users to submit jobs needs to have this daemon running.
Users submit jobs to the condor_schedd, where they are stored in the job queue.
condor_submit, condor_q, and condor_rm connect to the condor_schedd to view and manipulate the job queue.
condor_shadow
Created by the condor_schedd.
Runs on the machine where the job was submitted.
Any system call performed by the job on the remote execute machine is sent over the network to this daemon; the shadow performs the system call (such as file I/O) on the submit machine and the result is sent back over the network to the remote job.
condor_collector
Collects all the information about the status of a Condor pool.
All other daemons periodically send updates to the collector.
These updates describe the state of the daemons, the resources they represent, and the resource requirements of submitted jobs.
The condor_status command connects to this daemon for information.
condor_negotiator
Responsible for all the matchmaking within the Condor system.
Interactions among Condor daemons
Condor daemons when running a job
Resource Management System: Condor
Most jobs are not independent:
Dependencies exist between jobs.
A second stage cannot start until the first stage has completed.
Condor uses DAGMan - the Directed Acyclic Graph Manager.
DAGMan allows you to specify dependencies between your Condor jobs and then runs the jobs automatically in a sequence that satisfies those dependencies.
DAGs are the data structures used by DAGMan to represent these dependencies.
Each job is a “node” in the DAG.
Each node can have any number of “parent” or “child” nodes – as long as there are no loops.
Resource Management System: Condor
A DAG is defined by a text file, separate from the Condor job description files, listing each of the nodes and their dependencies:
# diamond.dag
Job A a.sub
Job B b.sub
Job C c.sub
Job D d.sub
Parent A Child B C
Parent B C Child D
DAGMan will examine the DAG file, locate the submission file for each job, and run the jobs in the right sequence.
(Diagram: the diamond DAG - Job A at the top, Jobs B and C in the middle, Job D at the bottom.)
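Conceptually, DAGMan releases a node only once all of its parent nodes have completed. The following minimal C sketch illustrates that ordering logic with a Kahn-style topological sort over the diamond DAG above; it is an illustration of the idea only, not Condor's actual implementation.

/* Minimal sketch (not Condor's actual code): how a DAG manager can decide
 * job ordering, using Kahn's topological sort on the diamond DAG above
 * (A -> B, A -> C, B -> D, C -> D). */
#include <stdio.h>

#define N 4                              /* jobs A..D */

int main(void) {
    const char *name[N] = {"A", "B", "C", "D"};
    int edge[N][N] = {0};                /* edge[i][j]: i must finish before j */
    edge[0][1] = edge[0][2] = 1;         /* Parent A Child B C */
    edge[1][3] = edge[2][3] = 1;         /* Parent B C Child D */

    int indeg[N] = {0}, done[N] = {0};
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            indeg[j] += edge[i][j];

    /* Repeatedly "submit" every job whose parents have all completed. */
    for (int finished = 0; finished < N; ) {
        for (int j = 0; j < N; j++) {
            if (!done[j] && indeg[j] == 0) {
                printf("submit job %s\n", name[j]);
                done[j] = 1;
                finished++;
                for (int k = 0; k < N; k++)
                    if (edge[j][k]) indeg[k]--;
            }
        }
    }
    return 0;
}

Running it prints the jobs in a valid order: A, then B and C, then D.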
Example Clusters
BlueGene/L
Source: IBM
No. 1 in Top500 list from 2005-2007
BlueGene/L – networking
The BlueGene system employs various network types.
Central is the torus interconnection network:
3D torus with wrap-around.
Each node connects to six neighbours (bidirectional).
Routing achieved in hardware.
Each link runs at 1.4 Gbit/s.
1.4 Gbit/s × 6 links × 2 directions = 16.8 Gbit/s aggregate bandwidth per node.
BlueGene/L
The other three networks:
Binary combining tree
• Used for collective/global operations: reductions, sums, products, barriers, etc.
• Low latency (2 μs)
Gigabit Ethernet I/O network
• Supports file I/O
• An I/O node is responsible for performing I/O operations for 128 processors
Diagnostic & control network
• Booting nodes, monitoring processors.
Each chip has the above four network interfaces (torus, tree, I/O, diagnostics).
Note that specialised networks are used for different purposes - quite different from many other HPC cluster architectures.
BlueGene/L
Message Passing:
The BlueGene team focused a good deal of effort on developing an efficient MPI implementation to reduce latency in the software stack.
Using the MPICH code base as a starting point:
• The MPI library was enhanced with respect to the machine architecture.
• For example, using the combining tree for reductions & broadcasts.
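To make the collectives point concrete, here is a minimal MPI sketch in C. It uses only standard MPI calls (nothing BlueGene-specific); on BlueGene/L, reductions, broadcasts and barriers like these are the operations the MPI implementation maps onto the combining-tree network.

/* Minimal MPI sketch: the collectives (reduction, broadcast, barrier) that
 * BlueGene/L's MPI maps onto the combining-tree network.
 * Standard MPI calls only; nothing here is BlueGene-specific. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Global sum: every rank contributes one value, all ranks get the total. */
    double local = (double)rank, total = 0.0;
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* Broadcast: rank 0 distributes a parameter to all ranks. */
    double param = (rank == 0) ? 3.14 : 0.0;
    MPI_Bcast(&param, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* Barrier: global synchronisation point. */
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %g, param = %g\n", size, total, param);

    MPI_Finalize();
    return 0;
}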
Reading paper:
ASCI Q
The Q supercomputing system at Los Alamos National Laboratory (LANL)
Product of the Accelerated Strategic Computing Initiative (ASCI) program
Used for simulation and computational modelling
ASCI Q
“Classical” cluster architecture.
1024 SMPs (AlphaServer ES45s from HP) make up one segment
• Each with four 1.25 GHz EV-68 CPUs with 16 MB cache
The whole system has 3 segments
• The three segments can operate independently or as a single system
• Aggregate 60 TeraFLOPS capability
• 33 Terabytes of memory
664 TB of global storage
Interconnection using
• Quadrics switch interconnect (QSNet)
• High bandwidth (250 MB/s) and low latency (5 μs) network
Earth Simulator
Built by NEC, located in the Earth Simulator Centre in Japan
Used for running global climate models to evaluate the effects of global warming.
No. 1 in the Top500 list from 2002-04.
Earth Simulator
640 nodes, each with 8 vector processors and 16 GB memory
Two nodes are installed in one cabinet.
In total:
5120 processors (NEC SX-5)
10 TeraBytes of memory
700 TeraBytes of disk storage and 1.6 PetaBytes of tape storage
Computing capacity: 36 TFlop/s
Networking: crossbar interconnection (very expensive)
Bandwidth: 16 GB/s between any two nodes
Latency: 5 μs
Dual-level parallelism: OpenMP in-node, MPI out of node (see the sketch after this list).
Physical installation:
Machine resides on the 3rd floor; cables on the 2nd; power generation & cooling below.
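A minimal sketch of the dual-level style in C, using generic MPI + OpenMP (not Earth-Simulator-specific code): MPI splits the work across nodes, and OpenMP threads share each node's slice.

/* Minimal hybrid MPI + OpenMP sketch of "dual-level parallelism":
 * MPI between nodes, OpenMP threads within a node. Generic code,
 * not Earth-Simulator-specific. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each MPI process (e.g. one per node) takes a contiguous slice. */
    int chunk = N / size;
    int start = rank * chunk;
    int end   = (rank == size - 1) ? N : start + chunk;

    /* Within the node, OpenMP threads (e.g. one per processor) share the slice. */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = start; i < end; i++)
        local_sum += (double)i;

    /* Combine the per-node results across nodes with MPI. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.0f (across %d MPI processes)\n", global_sum, size);

    MPI_Finalize();
    return 0;
}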
UK systems – Cambridge PowerEdge
576 Dell PowerEdge 1950 compute servers
Computing capability: 28 TFlop/s
Each server has two dual-core Intel Xeon 5160 processors
3 GHz and 8 GB of memory
InfiniBand network
Bandwidth: 10 Gbit/s
Latency: 7 μs
Cluster Networks
Introduction
Communication has significant impact on application performance.
Interconnection networks therefore have a vital role in cluster systems.
As usual, the driver is performance…
An increase in compute power typically demands a proportional improvement in the communication service: lower latency and higher bandwidth.
Cluster Networks
Issues with cluster interconnections are similar to those with normal networks:
Latency & Bandwidth
• Latency = sender overhead + switching overhead + (message size / bandwidth) + receiver overhead (a worked example follows this list).
Topology type (bus, ring, torus, hypercube, etc.).
Routing, switching.
Direct connections (point-to-point) or indirect connections.
NIC (Network Interface Card) capabilities.
Physical media (wiring density, reliability)
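A quick worked example of the latency formula above, in C. The numbers are illustrative assumptions only, not measurements of any particular network.

/* Worked example of the latency formula:
 * latency = sender overhead + switching overhead
 *           + (message size / bandwidth) + receiver overhead.
 * All numbers are illustrative assumptions, not real measurements. */
#include <stdio.h>

int main(void) {
    double sender_overhead    = 1.0e-6;  /* 1 us in the sending host          */
    double switching_overhead = 0.5e-6;  /* 0.5 us in switches along the path */
    double receiver_overhead  = 1.0e-6;  /* 1 us in the receiving host        */
    double bandwidth = 250.0e6;          /* 250 MB/s link bandwidth           */
    double msg_size  = 4096.0;           /* 4 KB message                      */

    double latency = sender_overhead + switching_overhead
                   + (msg_size / bandwidth) + receiver_overhead;

    /* Prints about 18.9 us: dominated by the transfer term for 4 KB. */
    printf("end-to-end latency = %.2f us\n", latency * 1e6);
    return 0;
}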
Interconnection Topologies
In standard LANs we have two general structures:
Shared network (bus)
• As used by “classic” Ethernet networks.
• All messages are broadcast… each processor listens to every message.
• Requires complex access control (e.g. CSMA/CD).
• Collisions can occur: requires back-off policies and retransmissions.
• Suitable when the offered load is low - inappropriate for high-performance applications.
• Very little reason to use this form of network today.
Switched network
• Permits point-to-point communications between sender & receiver.
• Fast internal transport provides high aggregate bandwidth.
Metrics to evaluate network topology
Useful metrics for switched network topology:
Scalability: how the number of switches scales with the number of nodes.
Degree: number of links to / from a node.
Diameter: the shortest path between the furthest nodes.
Bisection width: the minimum number of links that must be cut in order to divide the topology into two independent networks of the same size (+/- one node). Essentially a measure of bottleneck bandwidth - if it is higher, the network will perform better under load.
Interconnection Topologies
Crossbar switch:
Low latency and high throughput.
Switch scalability is poor - O(N²)
Interconnection Topologies
Linear Arrays and Rings
Consider networks with switch scaling costs better than O(N²).
In one dimension, we have simple linear arrays.
O(N) switches.
These can wrap around to make a ring or 1D torus.
Good overall bandwidth, but latency is high.
Interconnection Topologies
2D Meshes
Can wrap-around as a 2D torus.
Switch scaling: O(N)
Average degree: 4
Diameter: O(2√N)
Interconnection Topologies
Hypercubes:
K dimensions, N = 2^K switches. Diameter: O(K). Good bisection width (2^(K-1) = N/2).
Interconnection Topologies
Binary Tree:
Scaling:
• n = 2^d processor nodes (where d = depth)
• 2^(d+1) - 1 switches
Degree: 3
Diameter: O(2d)
Interconnection Topologies
Fat trees:
Similar in diameter to a binary tree.
Bisection width (which equates to the bottleneck) is greatly improved due to the additional links at higher levels of the tree.
Interconnection Topologies
Summary of topologies:
Topology     Degree       Diameter   Bisection
1D Array     2            N-1        1
1D Ring      2            N/2        2
2D Mesh      4            2√N        √N
2D Torus     4            √N         2√N
Hypercube    n = log(N)   n          N/2
There are others - we saw a 3D torus in the BlueGene/L section for instance.
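As a sanity check on the table, here is a small C sketch that evaluates the diameter and bisection formulas for a given node count N. It assumes N is a perfect square for the 2D rows and a power of two for the hypercube row; the example value of N is arbitrary.

/* Evaluate the diameter / bisection formulas from the table above for a
 * given node count N. Assumes N is a perfect square (2D mesh/torus) and a
 * power of two (hypercube). Compile with -lm for the math library. */
#include <stdio.h>
#include <math.h>

int main(void) {
    int N = 64;                           /* example node count (assumption) */
    double s = sqrt((double)N);           /* side length of the 2D layouts   */
    int k = (int)round(log2((double)N));  /* hypercube dimension             */

    printf("N = %d\n", N);
    printf("1D array:  diameter %d, bisection %d\n", N - 1, 1);
    printf("1D ring:   diameter %d, bisection %d\n", N / 2, 2);
    printf("2D mesh:   diameter %.0f, bisection %.0f\n", 2.0 * s, s);
    printf("2D torus:  diameter %.0f, bisection %.0f\n", s, 2.0 * s);
    printf("hypercube: diameter %d, bisection %d\n", k, N / 2);
    return 0;
}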