HPC:Main Page

High Performance Computing at Penn Medicine is supported by the Enterprise Research Applications (ERA) group within Penn Medicine Academic Computing Services (PMACS). Please send requests and problem reports to Jason Hughes, Rikki Godshall, or Anand Srinivasan.


Other Pages

  • [[HPC:User_Guide|User Guide]]
  • [[HPC:Software|Available Software]]
  • [[HPC:Archive_System|PMACS Archive System]]

PMACS ERA

The Enterprise Research Applications (ERA) group within Penn Medicine Academic Computing Services (PMACS) is a small, diverse team focused on providing high-performance and research computing support to faculty and staff at the University of Pennsylvania.

PMACS ERA Team:

  • Jason Hughes, Director, Enterprise Research Applications
  • Rikki Godshall, Enterprise IT Architect
  • Anand Srinivasan, Sr. IT Project Leader

About the PMACS Cluster

The PMACS HPC facility opened in April 2013 to meet growing demand for genomics processing and storage, as well as growth in other scientific areas requiring computational capacity, such as imaging, biostatistics, and bioinformatics. The cluster is managed by the Enterprise Research Applications (ERA) team within Penn Medicine Academic Computing Services (PMACS) and is located at the Philadelphia Technology Park, a Tier-3, SSAE 16/SAS 70 Type II audit-compliant colocation/data center facility.

The PMACS Cluster hardware comprises:

  • 1 dedicated master node
  • 64 IBM iDataPlex nodes that serve as compute nodes
  • Each compute node has:
    • Two eight-core Intel E5-2665 2.4 GHz Xeon processors, with hyperthreading turned on (so 32 threads per node)
    • 256 GB of RAM (16 GB per physical core)
    • 500 GB of internal storage
  • 1 dedicated "Big-Memory" machine with 64 cores and 1.5 TB of RAM
  • 870 TB of total usable shared disk storage provided via an eight-node IBM Scale Out Network Attached Storage (SONAS) system
  • 1.8 PB of archival storage
  • All compute and storage connected via 10GigE interconnect
  • Dedicated 10GigE PennNet link from campus to the datacenter
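
To confirm the per-node figures listed above from a shell on a compute node, standard Linux commands are sufficient; this is a quick sketch using generic tools, nothing PMACS-specific:

    # logical CPUs visible to the OS (32 per node with hyperthreading on)
    nproc
    # detailed CPU model, core, and thread information
    lscpu
    # installed memory, reported in gigabytes
    free -g
    # free space on the node's internal disk
    df -h /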

Computational job scheduling/queuing and cluster management are orchestrated by the IBM Platform Computing suite of products.
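
The suite includes Platform LSF, the scheduler behind the bsub command referenced in the Guidelines below. As a minimal sketch of day-to-day use (the script name and resource requests are illustrative only; consult 'man bsub' and the User Guide for local queue names and limits):

    # submit a script as a batch job; %J expands to the job ID
    bsub -o myjob.%J.out -e myjob.%J.err ./myjob.sh

    # request 4 cores on a single host for the same job
    bsub -n 4 -R "span[hosts=1]" -o myjob.%J.out ./myjob.sh

    # list your jobs, then kill one by its job ID (12345 is a placeholder)
    bjobs
    bkill 12345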

Accounts

For account requests, please contact Jason Hughes, Rikki Godshall, or Anand Srinivasan.

Usage Policies

Penn Acceptable Use Policy: http://www.upenn.edu/computing/policy/aup.html

Guidelines

Don't run compute-intensive tasks on the cluster head node (consign). Use an interactive node instead (bsub -Is bash). Please read the man page for 'bsub' or refer to the User Guide.
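
For example, a sketch of the intended workflow ('my_analysis.sh' is a placeholder for your own workload, not a real script on the cluster):

    # On consign, do NOT simply run:
    #   ./my_analysis.sh
    # Instead, open an interactive shell on a compute node first:
    bsub -Is bash
    # ...and run the workload inside that session, or submit it as a batch job from consign:
    bsub -o my_analysis.%J.out ./my_analysis.sh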