HPC:Main Page

High Performance Computing at Penn Medicine is supported by the Enterprise Research Applications (ERA) group within Penn Medicine Academic Computing Services (PMACS). Please send requests and report problems to Jim Kaylor, Rikki Godshall, or Anand Srinivasan.


Other Pages

PMACS ERA

The High Performance Computing (HPC) team within the Enterprise Research Applications (ERA) group in Penn Medicine Academic Computing Services (PMACS) is a small, diverse team focused on providing HPC and Research Computing support to faculty and staff members of the University of Pennsylvania.

PMACS ERA HPC Team:

  • Jim Kaylor, Interim Director, Enterprise Research Applications
  • Rikki Godshall, Enterprise IT Architect
  • Anand Srinivasan, Sr. IT Project Leader

Past ERA HPC Team members:

  • Jason Hughes, Former Director, Enterprise Research Applications

Weekly Office Hours

Have questions about the PMACS HPC? Come see us in person! We will have weekly Office Hours at the following location and time:

  • Location : Smilow Center for Translational Research (SCTR, formerly TRC)
  • Room : 10-120
  • Day/Time : Thursdays / 3-4PM Eastern Time

About the PMACS Cluster

The PMACS HPC facility opened in April 2013 to meet increasing demand for genomics processing and storage, as well as growth in other scientific areas requiring computational capacity, such as imaging, biostatistics, and bioinformatics. The cluster is managed by the Enterprise Research Applications (ERA) team within Penn Medicine Academic Computing Services (PMACS) and is located at the Philadelphia Technology Park, a Tier 3, SSAE 16/SAS 70 Type II audit-compliant colocation/data center facility.

The hardware of the PMACS Cluster comprises the following (a short sketch for checking these figures from a compute node follows the list):

  • 1 dedicated master node
  • 144 IBM iDataPlex nodes that serve as compute nodes
  • Each compute node has:
    • Two eight-core Intel Xeon E5-2665 2.4 GHz processors with hyperthreading enabled (32 threads per node)
    • 192-256 GB of RAM (12-16 GB per physical core, depending on the node)
    • 500 GB of internal storage
  • 1 dedicated "Big-Memory" machine - 64 cores & 1.5 TB RAM
  • 3.4 petabytes of total usable shared IBM GPFS disk storage provided via a six-node, InfiniBand-connected GPFS cluster
  • 1.6 petabytes of total usable shared disk storage provided via an eight-node IBM Scale Out Network Attached Storage (SONAS) system (older filesystem)
  • 1.8 petabytes of archival storage
  • All compute and storage connected via a 10 GigE interconnect
  • Dedicated 10 GigE PennNet link from campus to the data center
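
The per-node figures above can be spot-checked from inside a session on a compute node; the commands below are a minimal sketch using standard Linux tools, not a site-specific procedure:

    # from an interactive session on a compute node (e.g. bsub -Is bash),
    # standard Linux tools report the per-node resources listed above
    nproc                          # logical CPUs: 32 with hyperthreading on
    lscpu | grep "Model name"      # processor model (Intel Xeon E5-2665)
    free -g                        # installed RAM in GB
    df -h                          # local and shared filesystem sizes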

Computational job scheduling/queuing and cluster management are orchestrated by the IBM Platform Computing suite of products.
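
As a minimal batch-submission sketch under this suite (assuming the LSF-style bsub interface referenced in the Guidelines below; the job name, slot count, and script path are hypothetical examples, not site defaults):

    # submit a 4-slot batch job; output/error files are tagged with the job ID (%J)
    bsub -J myjob -n 4 -o myjob.%J.out -e myjob.%J.err ./run_analysis.sh

    # check the status of your jobs and the available queues
    bjobs
    bqueues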

Costs

The PennHPC baseline cost structure is fee-for-service, based on the service-center model. Below are the costs associated with using the PMACS cluster as of 4/30/2014 (a worked example follows the list):

  • $0.035/vCore slot/hour for compute
  • $0.055/GB/month for disk usage
  • $0.015/GB/month for archive storage
  • $95/hour for consulting services (excludes account setup)
  • No charges to maintain an account; charges are billed on an as-consumed basis only. If data is left behind by a user who no longer uses the cluster, disk-usage charges will continue to be billed until the data is deleted from the account.
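
As a worked illustration using the rates above (the usage figures are hypothetical, not taken from this page): a job occupying 32 vCore slots for 24 hours costs 32 × 24 × $0.035 = $26.88 in compute; keeping 1,000 GB on disk for a month costs 1,000 × $0.055 = $55.00; archiving the same 1,000 GB instead costs 1,000 × $0.015 = $15.00 per month.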

Accounts

For account requests, please contact Jim Kaylor, Rikki Godshall, or Anand Srinivasan.

Please be sure to include the following information in your account request email:

  • User Info:
    • User's Full Name:
    • User's Email:
    • User's PennKey:
    • User's PennID:
    • User's Status: Student/Post-Doctoral Fellow
    • Lab rotation end date/Account expiration date (if applicable):
    • Does the data the user intends to transmit to/from, store, or process on the PennHPC require HIPAA, FISMA, or 21 CFR Part 11 compliance?: Yes/No
  • PI Info:
    • PI's Full Name:
    • PI's Email:
    • PI's PennKey (if exists):
  • Business Administrator (BA)/Billing info:
    • BA's Name:
    • BA's Email:
    • 26-digit Budget code to bill HPC usage to:


Note: If the user account is not requested by the BA/PI, we will follow up directly with the BA/PI for authorization.

Usage Policies

Penn Acceptable Use Policy: http://www.upenn.edu/computing/policy/aup.html

Guidelines

Don't run compute-intensive tasks on the cluster head node (consign). Use an interactive node instead (bsub -Is bash). Please read the man page for 'bsub' or refer to the User Guide; a short example follows.
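
A minimal sketch of requesting an interactive session (the bsub -Is bash command is the one given above; everything else is standard shell usage):

    # request an interactive shell on a compute node instead of working on consign
    bsub -Is bash

    # ... run compute-intensive work here, then release the slot when finished
    exit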