Funding for research that uses or expands upon cyberinfrastructure is available through a variety of federal and industry sources. Many of these websites provide a link where you can subscribe for regular email updates.
Federal Funding Sources
National Science Foundation – Office of Cyberinfrastructure
National Institutes of Health
National Endowment for the Humanities (search for Office of Digital Humanities)
Department of Energy Computational Science Graduate Fellowship (Krell Institute)
Other federal grants
Industry Funding Sources
Writing Your Grant
Template for Data Management Plans
Several U.S. funding agencies, including the National Science Foundation and the National Institutes of Health, require researchers to supply detailed, cost-effective plans for managing research data. We offer links to tools and guidelines on our Data Management Resources page.
Template for Using Flux
This boilerplate language was written for researchers to use in grant proposals that request funding to use Flux, the university’s shared HPC cluster. To request other boilerplate language not included here (e.g., network capabilities), please send email to email@example.com.
For the project description
To support the computational discovery proposed in this grant, this project requires access to high performance computing resources. We expect to use an average of <xxx> cores over the next <xxx> months.
Advanced Research Computing (ARC) at the University of Michigan operates a shared cluster resource, Flux, and leases cores at a rate of <$xx.xx> per core per month. Our total cost for using this resource will be <$xx,xxx>.
Flux provides on-demand access, allowing us to adjust the resource to fit our usage. We consider the use of Flux an advantage over operating our own hardware for this project.
For the budget
<You should insert cost into your budget matching the costs above. This cost should be listed as a service, not as a hardware item.>
For the justification
Advanced Research Computing (ARC) at the University of Michigan operates a shared computing cluster, called Flux, for the university’s faculty, staff and students. (The details of the facility are listed in the Facilities section of the proposal.) By leasing processing cores from ARC, we are not bound to a static number of processing cores from month to month; we can vary our use of this resource to meet our research and schedule demands. We also receive access to high-quality application support offered by ARC staff.
The <$xx,xxx> that we have budgeted will give us access to <xxx> core-months at a rate of $18.00 per core per month, along with the necessary RAM, disk, InfiniBand interconnects, and parallel file system for our needs.
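As a sanity check when filling in the budget and justification figures, the lease cost scales as cores × months × monthly per-core rate. A minimal sketch of that arithmetic (the core count and duration below are illustrative placeholders, not figures from any actual proposal):

```python
def flux_cost(cores, months, rate_per_core_month=18.00):
    """Total Flux lease cost: cores x months x monthly per-core rate.

    Default rate is the 2012-2013 academic-year rate of $18.00
    per processing core per month.
    """
    core_months = cores * months
    return core_months * rate_per_core_month

# Illustrative example: 50 cores for 12 months at the default rate.
total = flux_cost(cores=50, months=12)
print(f"{50 * 12} core-months -> ${total:,.2f}")  # 600 core-months -> $10,800.00
```

The same function can be re-run with the rate for a different academic year (e.g., `rate_per_core_month=11.20` for 2011-2012) to compare budget scenarios.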
Other facilities information
<Feel free to use the sections below as a part of the needed instrumentation for your grant. They are more detailed descriptions of Flux and ARC.>
Flux is a cluster of nearly identical machines based on the Intel Nehalem platform, interconnected with InfiniBand networking. Each compute node comprises two 6-core CPUs and 48 GB of RAM (approximately 4 GB of usable RAM per CPU core). The InfiniBand interconnect provides 40 Gb/s of bandwidth with very low latency. Because Flux hardware and software are refreshed regularly to stay current with evolving technology, the system may be upgraded between the time of this proposal and the start period of the grant.
The system also includes 350 terabytes of scratch storage on the Lustre parallel network filesystem. This storage is intended solely for short-term data used to perform calculations, not for long-term data storage or archiving.
For the 2011-2012 academic year, the Flux rate is $11.20 per processing core per month. For the 2012-2013 academic year, the rate will be $18.00 per processing core per month.
The rate after 2013 has not been determined, but we expect it to be approximately $22.00 per processing core per month.
College of Engineering High Performance Computing Group and CAC
The College of Engineering high performance computing (HPC) group has been providing high performance computing expertise for the College of Engineering since it was a part of the National Partnership for Advanced Computational Infrastructure (NPACI) in the late 1990s. In 2010, the HPC group was commissioned to provide expertise to the broader university community. The HPC group is guided in this mission by a steering committee of university faculty members.
The Center for Academic Computing (CAC) currently runs two clusters, Nyx and Flux, each comprising thousands of processing cores. They share a scheduler and resource manager, as well as login nodes and the same InfiniBand fabric.
The College of Engineering currently provides four staff members for the operational support of Flux, supplemented by staff from the Office of Research Cyberinfrastructure. Information Technology Services (ITS) provides business and core services support.
In addition, other colleges and schools provide end-user support for Flux based on domain-specific knowledge. This support comes in the form of direct user support, programming support, and introductory training courses.
<Researchers and schools should feel free to add any details on their specific groups here.>
The university’s Medical School Information Systems team also provides end-user support for Flux based on domain-specific knowledge, in the form of direct user support, programming support, and introductory training courses.