Main Page

Welcome to the Sherlock wiki! Sherlock is a high performance computing (HPC) cluster administered by the Stanford Research Computing Center. It is available to all Stanford faculty for their research. There are 127 shared servers available to all researchers, including 1.5 TB RAM "bigmem" and GPU nodes. In addition, more than 600 servers are available to Sherlock owners, faculty who have augmented the cluster with their own purchases.

System Status

https://sherlock-status.stanford.edu/

Scheduled Maintenance

Next Maintenance: January 17, 8am-6pm

See the maintenance page for more information.


Getting Started

Please note that this system is not HIPAA compliant and should not be used to process any PHI or PII, nor should it be used as a platform for storing or processing data that are considered Moderate or High risk. See https://itservices.stanford.edu/guide/riskclassifications for more information.

How to Request an Account

To request an account, the sponsoring Stanford faculty member should email research-computing-support@stanford.edu, specifying the names and SUNetIDs (usernames) of the research team members needing an account. Sherlock is open to the Stanford community as a resource to support sponsored research, so a faculty member's explicit consent is required for account requests. Note that Sherlock is not a platform for course work, class assignments or general-use training sessions.

Next steps: Set up Kerberos -> Login -> Submit Jobs

Before you can log in to Sherlock, you need to set up Kerberos on your laptop or desktop. Follow the steps below.
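As a minimal sketch of that workflow (the Kerberos realm, ssh options and script name below are assumptions; follow the Kerberos and Submit Jobs pages for the exact steps):

  # Obtain a Kerberos ticket for your SUNetID, then verify it.
  kinit sunetid@stanford.edu
  klist

  # Log in to the load-balanced login nodes; the GSSAPI options let ssh
  # authenticate with the Kerberos ticket instead of a password.
  ssh -o GSSAPIAuthentication=yes -o GSSAPIDelegateCredentials=yes sunetid@sherlock.stanford.edu

  # Once on a login node, submit a batch script to the SLURM scheduler
  # and check its status (my_job.sbatch is a placeholder name).
  sbatch my_job.sbatch
  squeue -u $USER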

More information: Data Storage/Transfer, Available Software, Fairshare Policy, How the scheduler works, queue structure, GPUs

Sherlock Glossary

A Glossary of common Sherlock terms.


SLURM Scheduler Frequently Asked Questions

SLURM FAQ listing common problems and solutions


Acknowledgment

Users wanting to acknowledge the use of Sherlock in publications can use the following wording:
Parts of the computing for this project were performed on the Sherlock cluster. We would like to thank Stanford University and the Stanford Research Computing Center for providing computational resources and support that have contributed to these research results.

Support

By email

Research Computing support can be reached by sending an email to research-computing-support@stanford.edu and mentioning 'Sherlock'. Please include additional details, such as job IDs and sbatch files, so we can help you better: see how to ask questions.
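For example, these standard SLURM commands can help you gather those details before writing in (a sketch; the job ID shown is a placeholder):

  # List your running and pending jobs, with their job IDs.
  squeue -u $USER

  # Accounting summary for a completed job; replace 123456 with your job ID.
  sacct -j 123456 --format=JobID,JobName,State,Elapsed,MaxRSS

  # Full scheduler record for a job still in the queue.
  scontrol show job 123456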

If you have a question for other users rather than for the sysadmins, you can email the user mailing list at sherlock-users@lists.stanford.edu.

Office hours

Office hours are Tuesdays 10-11am and Thursdays 3-4pm in Polya Hall, room 261 (2nd floor).
Please feel free to stop by if you have any questions or trouble using Sherlock; we'll be happy to help you.
There are also more general office hours in the Huang building by the ICME offices, Fridays 1-3pm (Spring 2017): http://web.stanford.edu/group/c2/index.html

System info

The base set of resources available to all Stanford researchers includes the following (a sketch of how to request some of them through the scheduler follows this list):

  • Four load-balanced login nodes; sherlock.stanford.edu is load-balanced among sherlock-ln01.stanford.edu, sherlock-ln02, sherlock-ln03 and sherlock-ln04
  • 120 general compute nodes with dual socket Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz (8 core/socket); 64 GB RAM (1866 MHz DDR3), 100 GB local SSD
  • 2 "bigmem" nodes with quad socket Intel(R) Xeon(R) CPU E5-4640 @ 2.40GHz (8 core/socket); 1.5 TB RAM; 13 TB local storage
  • 6 GPU nodes with dual socket Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz; 256 GB RAM; 200 GB local storage
    • 2 nodes with 8 NVIDIA Tesla K20Xm
    • 3 nodes with 8 NVIDIA GeForce GTX TITAN Black
  • 2:1 oversubscribed FDR Infiniband network
  • Isilon high speed NFS for /home; backed up via snapshots and replicated to alternate site. Snapshots are in ~/.snapshot
  • 1 PB Lustre parallel file system for /scratch
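As a hedged illustration of how these resources are typically requested through the SLURM scheduler, a batch script might look like the following; the partition name and scratch path are assumptions, so check the queue structure and storage pages for the actual values:

  #!/bin/bash
  #SBATCH --job-name=gpu_example
  #SBATCH --partition=gpu          # hypothetical partition name; see the queue structure page
  #SBATCH --gres=gpu:1             # ask the scheduler for one GPU on a GPU node
  #SBATCH --mem=64G                # per-node memory; a bigmem partition would be the target for jobs needing up to 1.5 TB
  #SBATCH --time=01:00:00
  #SBATCH --output=slurm-%j.out    # %j expands to the job ID

  # /scratch (Lustre) is meant for large temporary job data; /home (Isilon NFS)
  # is backed up via snapshots available under ~/.snapshot.
  cd /scratch/users/$USER          # assumed per-user scratch layout; adjust to your own directory
  nvidia-smi                       # confirm which GPU was allocated to the job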

Latest System News, Containers and New Software

Tutorials/Examples

Becoming an Owner

Adding to the Cluster for Your Research Team's Prioritized Use

Three or four times a year, we expand Sherlock by providing faculty with the opportunity to purchase recommended nodes and standard storage building blocks for the use of their research teams. Using a traditional compute cluster condominium model, participating faculty and their research teams have priority use of the resources they purchase; when those resources are not in use, other "owners" can use them. When the purchasing owner wants to use their resources, other jobs running on those nodes will be killed. Participating owner PIs also have shared access to the original base Sherlock nodes, along with everyone else. Note that the minimum purchase per PI is one physical server; we cannot accommodate multiple PIs pooling funds for single nodes.

This model has been more successful than we anticipated: in less than two years, Stanford faculty owners' purchases have reached such volume that the original Sherlock network interconnect fabric is full! We are in the process of architecting the next environment and anticipate making new equipment purchases, for general shared researchers' use as well as for owners, late in the fall quarter of 2016. We will be adding new networking, some new community nodes as well as owner nodes, and new functionality, and will also use a more recent version of the Linux operating system.

If you are interested in becoming an owner in the fall, please send an email to research-computing-support@stanford.edu. Given the procurement, integration and build processes, and staff resources, no new nodes can be added to Sherlock before the end of 2016/early 2017.

You can find the latest information about ordering nodes on Sherlock at https://srcc.stanford.edu/private/sherlock2 (SUNet ID login required).

Links

Other wikis on campus that are similar to this one

Non-Stanford links to similar resources
