HPC:Main Page

From HPC wiki
Revision as of 20:33, 10 December 2019 by Asrini

High Performance Computing at Penn Medicine is supported by the Enterprise Research Applications (ERA) group within Penn Medicine Academic Computing Services (PMACS). Please send requests and report problems to the HPC team (Jim Kaylor, Rikki Godshall, or Anand Srinivasan).

PMACS ERA


The High Performance Computing (HPC) team within the Enterprise Research Applications (ERA) group in Penn Medicine Academic Computing Services (PMACS), is a small and diverse team of individuals focused on providing HPC and Research Computing support to the faculty and staff members of the University of Pennsylvania.

PMACS ERA HPC Team:

  • Jim Kaylor, Director, Enterprise Research Applications (ERA), PMACS
  • Rikki Godshall, Manager, HPC and Cloud Services, ERA, PMACS
  • Anand Srinivasan, Sr. IT Project Leader, HPC and Cloud Services, ERA, PMACS
  • Oberon Kim, Intern

About the PMACS Cluster


The PMACS HPC facility opened in April 2013 to meet growing demand for genomics processing and storage, as well as growth in other scientific areas requiring computational capacity, such as imaging and biostatistics/bioinformatics. The cluster is managed by the Enterprise Research Applications (ERA) team within Penn Medicine Academic Computing Services (PMACS), and is located at the Philadelphia Technology Park, a Tier-3, SSAE 16/SAS 70 Type II Audit compliant colocation/data center facility.

Weekly Office Hours


Have questions about the PMACS HPC? Come see us in person! We will have weekly Office Hours at the following location and time:

  • Location : Smilow Center for Translational Research (SCTR, formerly TRC)
  • Room : 10-120
  • Day/Time : Thursdays / 3-4PM Eastern Time

Hardware


The hardware of the PMACS Cluster comprises:

  • 1 dedicated master node (physical) with 24 cores & 64GB RAM
  • 1 shadow master node (VM) with 12 cores & 24GB RAM
  • 3856 Total Physical cores across all compute nodes (up to 7648 virtual cores with hyper-threading turned on)
  • 7168 Total CUDA cores across GPU nodes
  • Over 43TB of total RAM across all compute nodes
  • 180 Compute nodes
    • 9x Dell C6420 Quad nodes (4 nodes per enclosure; 36 compute nodes total)
      • Each node within the Dell C6420 enclosure has:
        • Two 20-core Intel Xeon Gold 6148 2.40GHz CPUs, with hyper-threading turned on (so 80 threads per node)
        • 256-512 GB RAM each (6.4-12.8 GB per physical core, depending on the node)
        • 56 Gb/s InfiniBand connection to the GPFS file system
        • 1.6TB dedicated scratch space provided by local SSD or NVMe (depending on node)
    • 144x IBM iDataPlex nodes
      • Each IBM iDataPlex node has:
        • Two eight-core Intel E5-2665 2.4GHz Xeon Processors, with hyperthreading turned on (so 32 threads per node)
        • 196-256 GB of RAM each (12-16GB per physical core, depending on the node)
        • 500 GB of internal storage
  • 2x Big Memory nodes
    • 1x Dell R940 node
      • 4x 12-core Intel Xeon Gold 6126 2.60GHz CPUs, 96 threads total with hyper-threading turned on
      • 1.5TB RAM
      • 56 Gb/s InfiniBand connection to the GPFS file system
      • 1.6TB dedicated scratch space provided by local NVMe
    • 1x IBM x3850 node
      • 8x 8-core Intel E7-8837 2.6 GHz CPUs (64 CPU cores total; no hyperthreading)
      • 1.5TB RAM
      • 500 GB of internal storage
  • 2x GPU nodes; each configured with
      • 2x 22-core Intel Xeon E5-2699 v4 2.20GHz CPUs (88 threads per node, with hyperthreading turned on)
      • 256GB RAM
      • 1x Nvidia Tesla P100 16GB GPU Card (3584 CUDA cores & 16GB RAM per card)
      • 56 Gb/s InfiniBand connection to the GPFS file system
  • Storage
    • 4.2 PetaBytes of total usable shared IBM GPFS disk storage provided via a six-node InfiniBand connected GPFS cluster
    • 1.2 PetaBytes Archival storage
  • All compute and storage connected via 10GigE interconnect
  • Dedicated 10GigE PennNet link from campus to the datacenter

Computational job scheduling/queuing and cluster management is orchestrated by the IBM Platform Computing suite of products.

Costs


The PennHPC baseline cost structure is fee-for-service, based on the service-center model. Below are the costs associated with using the PMACS cluster as of 4/30/2014:

  • $0.035 per vCore (computational slot) hour
  • $0.055/GB/month for disk usage
  • $0.015/GB/month for archive storage
  • $95/hour for consulting services (excludes account setup)
  • No charge to maintain an account; charges are billed on an as-consumed basis only. If data is left behind by a user who no longer uses the cluster, disk-usage charges will continue to accrue until the data is deleted from the account.
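As a rough illustration of the as-consumed billing model, the published rates above can be combined into a simple monthly estimate. This is a sketch only; the workload figures in the example (core hours, storage sizes) are hypothetical and do not reflect any actual usage:

```python
# Rough monthly cost estimate using the published PMACS rates (as of 4/30/2014).
# The workload numbers in the example call below are hypothetical.

CORE_HOUR_RATE = 0.035   # $ per vCore slot hour
DISK_RATE = 0.055        # $ per GB per month of active disk usage
ARCHIVE_RATE = 0.015     # $ per GB per month of archive storage

def monthly_cost(core_hours, disk_gb, archive_gb):
    """Return the estimated monthly charge in dollars."""
    return (core_hours * CORE_HOUR_RATE
            + disk_gb * DISK_RATE
            + archive_gb * ARCHIVE_RATE)

# Example: 10,000 core hours, 500 GB active disk, 2,000 GB archive
# 10000 * 0.035 + 500 * 0.055 + 2000 * 0.015 = 350 + 27.50 + 30 = 407.50
cost = monthly_cost(10_000, 500, 2_000)
print(f"${cost:,.2f}")  # prints $407.50
```

Note that under this model, storage charges recur every month while compute charges accrue only when jobs actually run.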

Accounts


For account requests, please contact Jim Kaylor, Rikki Godshall, or Anand Srinivasan.

Please be sure to include the following information in your account request email:

  • User Info:
    • User's Full Name:
    • User's Email:
    • User's PennKey:
    • User's PennID:
    • User's Status: Student/Post-Doctoral Fellow
    • Lab rotation end date/Account expiration date (if applicable):
    • Does the data the user intends to transmit to/from, store, or process on the PennHPC require HIPAA, FISMA, or 21 CFR Part 11 compliance?: Yes/No
  • PI Info:
    • PI's Full Name:
    • PI's Email:
    • PI's PennKey (if exists):
  • Business Administrator (BA)/Billing info:
    • BA's Name:
    • BA's Email:
    • 26-digit Budget code to bill HPC usage to:


Note: If the user account is not requested by the BA/PI, we will follow up directly with the BA/PI for authorization.

Usage Policies


Penn Acceptable Use Policy: http://www.upenn.edu/computing/policy/aup.html

Other Pages