Welcome to the Sherlock wiki! Sherlock is a high-performance computing (HPC) cluster administered by the Stanford Research Computing Center. It is available to all Stanford faculty for computing associated with their research. There are 127 shared servers available to all researchers, including 1.5 TB RAM "bigmem" nodes and GPU nodes. More than 600 additional servers are available to Sherlock owners, faculty who have augmented the cluster with their own purchases. Note that Sherlock is approved for computing with Low Risk data only, not Moderate or High Risk data.
- 1 System Status
- 2 Scheduled Maintenance
- 3 Sherlock 2.0 Node Orders
- 4 Getting Started
- 5 Support
- 6 System info
- 7 Tutorials/Examples
- 8 Becoming an Owner
- 9 Links
Next Maintenance: as needed to patch severe security vulnerabilities. Beyond that, monthly full-day maintenance windows return in June or July 2017 with the "go live" of Sherlock 2.0.
See the maintenance page for more information. Per the University's Minimum Security policies, we patch Sherlock's key components as required for compliance.
Sherlock 2.0 Node Orders
The next quarterly order window for the new Sherlock 2.0 cluster will open in mid-to-late July, in time for hardware to be purchased and delivered by the end of Stanford's fiscal year.
Ordering details and prices (SUNet login required): https://srcc.stanford.edu/private/sherlock-qtr-order
Sherlock 2.0 info page: https://srcc.stanford.edu/private/sherlock2
Please note that this system is not HIPAA compliant and should not be used to process any PHI or PII, nor should it be used as a platform for processing data classified as Moderate or High Risk. See https://itservices.stanford.edu/guide/riskclassifications for more information.
How to Request an Account
To request an account, the sponsoring Stanford faculty member should email email@example.com, specifying the names and SUNetIDs (usernames) of the research team members who need accounts. Sherlock is open to the Stanford community as a computing resource to support departmental or sponsored research, so a faculty member's explicit consent is required for account requests. Note that Sherlock is not a platform for coursework, class assignments, or general-use training sessions.
Next steps: Set up Kerberos -> Log in -> Submit jobs
Before you can log in to Sherlock, you need to set up Kerberos on your laptop or desktop. Follow the steps below.
- First: Set up Kerberos
- Then: Log in to the cluster
- And then: Submit a job with SLURM
- Transferring files to/from the cluster
- Running Containers on Sherlock with Singularity
- Import Docker Images into Singularity Containers
- Cloud Computing, Google Compute Engine
- Compile your code
- More about SLURM
- More about Stata: https://web.stanford.edu/group/farmshare/cgi-bin/wiki/index.php/Stata
- Data storage and filesystems information
- School of Humanities and Sciences Partition
- Statistics Department partition
- Large data transfers: Use the Data Transfer Node and Globus
- Software available on Sherlock
- GPU Computing
- Job scheduler policies and queue structure
- Python on Sherlock
- OpenMP on Sherlock
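The Kerberos, login, and job-submission steps above can be sketched from a terminal as follows. The hostnames come from this wiki; the `#SBATCH` options, job script name, and resource values are illustrative assumptions, not site policy:

```shell
# 1. On your own laptop/desktop: obtain a Kerberos ticket for your SUNetID
kinit your_sunetid@stanford.edu

# 2. Log in to a load-balanced login node, forwarding the Kerberos ticket
ssh -K your_sunetid@sherlock.stanford.edu

# 3. On a login node: write a minimal batch script and submit it with SLURM
#    (time/memory values below are placeholders; adjust for your workload)
cat > hello.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --time=00:05:00
#SBATCH --mem=1G
echo "Hello from $(hostname)"
EOF
sbatch hello.sbatch   # sbatch reports the assigned job ID
squeue -u $USER       # check the status of your queued/running jobs
```

For moving data, small files can go over scp/rsync to a login node, but large transfers should use the Data Transfer Node and Globus, as noted in the list above.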
A Glossary of common Sherlock terms.
SLURM Scheduler Frequently Asked Questions: a FAQ listing common problems and solutions
It is important and expected that publications resulting from computations performed on Sherlock acknowledge the cluster. The following wording is suggested:
Some of the computing for this project was performed on the Sherlock cluster. We would like to thank Stanford University and the Stanford Research Computing Center for providing computational resources and support that contributed to these research results.
Research Computing support can be reached by emailing firstname.lastname@example.org and mentioning 'Sherlock'. Please include additional details, such as job IDs and sbatch files, so we can help you better: how to ask questions
If you have a question for other users rather than the sysadmins, you can email the user mailing list at email@example.com.
Office hours are Tuesdays 10-11am and Thursdays 3-4pm in Polya Hall, room 261 (2nd floor).
Please feel free to stop by if you have any questions or trouble using Sherlock; we'll be happy to help.
There are also more general office hours in Huang building by ICME offices, Fri 1PM-3PM (Spring 2017): http://web.stanford.edu/group/c2/index.html
The base set of resources available to all Stanford researchers includes the following:
- Four load balanced login nodes; sherlock.stanford.edu is load-balanced between sherlock-ln01.stanford.edu, sherlock-ln02, sherlock-ln03 and sherlock-ln04
- 120 general compute nodes with dual socket Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz (8 core/socket); 64 GB RAM (1866 MHz DDR3), 100 GB local SSD
- 2 "bigmem" nodes with quad socket Intel(R) Xeon(R) CPU E5-4640 @ 2.40GHz (8 core/socket); 1.5 TB RAM; 13 TB local storage
- 6 GPU nodes with dual socket Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz; 256 GB RAM; 200 GB local storage
  - 2 nodes with 8 NVIDIA Tesla K20Xm each
  - 3 nodes with 8 NVIDIA GeForce GTX TITAN Black each
- 2:1 oversubscribed FDR InfiniBand network
- Isilon high-speed NFS for /home; backed up via snapshots and replicated to an alternate site. Snapshots are available in ~/.snapshot
- 1 PB Lustre parallel file system for /scratch
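Jobs reach the bigmem and GPU hardware listed above through SLURM resource requests. A minimal sketch, assuming partition names `gpu` and `bigmem` (these names are assumptions; verify the real ones with `sinfo`):

```shell
# Request a GPU node: 1 GPU, 8 CPU cores, 32 GB RAM, 2-hour limit.
# Partition name "gpu" is an assumption; check `sinfo` for actual names.
sbatch --partition=gpu --gres=gpu:1 --cpus-per-task=8 --mem=32G \
       --time=02:00:00 --wrap="nvidia-smi"

# Request a large-memory job on a 1.5 TB "bigmem" node (partition name assumed)
sbatch --partition=bigmem --mem=500G --time=04:00:00 --wrap="./my_big_job"

# Recover an accidentally deleted file from an Isilon /home snapshot
cp ~/.snapshot/<snapshot_name>/myfile ~/myfile
```

Note that /scratch (Lustre) is not snapshotted, so anything you need to keep should live in /home or be copied off the cluster.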
Latest System News, Containers and New Software
- Latest System News: we now support containers and many more deep learning toolsets
- Linux Tutorials
- How to submit a job on Sherlock
- SLURM Tutorials: http://slurm.schedmd.com/tutorials.html
- R on Sherlock
- Running Matlab
- Running TensorFlow on the Sherlock GPU partition
- SRCC Research Applications Portal: web applications, reproducible environments, and software to help with your research
- SRCC Tutorials and useful links
Becoming an Owner
Adding to the Cluster for Your Research Team's Prioritized Use
Three or four times a year, we have expanded Sherlock by giving faculty the opportunity to purchase recommended nodes and standard storage building blocks for the use of their research teams. Under a traditional compute-cluster condominium model, participating faculty and their research teams have priority use of the resources they purchase; when those resources are idle, other "owners" can use them. When the purchasing owner wants to use their resources, other owners' jobs running on them are killed. Participating owner PIs also have shared access to the original base Sherlock nodes, along with everyone else. Note that the minimum purchase per PI is one physical server; we cannot accommodate multiple PIs pooling funds for a single node.
This model has been more successful than we anticipated: in less than two years, Stanford faculty owners' purchases have reached such volume that the original Sherlock network interconnect fabric is full! We are in the process of architecting the next environment and anticipate making new equipment purchases, for general shared researchers' use as well as for owners, late in the spring of 2017. We will be adding new networking, new community nodes as well as owner nodes, and new functionality, and will also move to a more recent version of the Linux operating system.
If you are interested in becoming an owner, please send an email to firstname.lastname@example.org.
You can find the latest information about ordering nodes on Sherlock at https://srcc.stanford.edu/private/sherlock2 (SUNet ID login required).
Other wikis on campus that are similar to this one:
- Our Group- The Stanford Research Computing Center (SRCC): http://srcc.stanford.edu
- FarmShare: https://farmshare.stanford.edu
- Genetics cluster: https://www.stanford.edu/group/scgpm/cgi-bin/informatics/wiki/index.php/Main_Page
- HPCC clusters: https://www.stanford.edu/group/hpcc/cgi-bin/mediawiki/index.php/Main_Page
- Proclus: https://www.stanford.edu/group/proclus/cgi-bin/mediawiki/index.php/Main_Page
- TACC course materials: http://www.tacc.utexas.edu/user-services/training/course-materials