Since the inception of “High Throughput Computing” in 1996, the PATh partners have coined many terms and concepts to better describe their computational methodology. Below are the terms that have been introduced, each with a short description.
High Throughput Computing (HTC) is the practice of maximizing the throughput of a computing resource toward a common problem.
Distributed High Throughput Computing (dHTC), as specialized by the OSG, involves running an HTC infrastructure across many independent, collaborating administrative domains.
The OSG is a consortium dedicated to the advancement of all of Open Science via the practice of distributed High Throughput Computing, and to the advancement of its state of the art.
The OSG Consortium provides a fabric of services, including a software stack, that organizations and resource providers can use to build dHTC environments.
The HTCondor Software Suite (HTCSS), based out of the Center for High Throughput Computing at UW-Madison, implements several technologies for creating a dHTC environment.
Users can place their workloads (such as jobs, job sets, and DAGs) at an Access Point (AP). The AP accesses one or more resource pools to acquire resources.
An Execution Point (EP) receives jobs from an AP and executes them.
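As a concrete illustration, a workload placed at an Access Point is commonly described with an HTCondor submit file. The following minimal sketch (the executable name and resource requests are hypothetical, not taken from this document) queues 100 independent jobs that the AP will match to acquired resources:

```
# Minimal HTCondor submit description (hypothetical executable name).
executable      = analyze.sh
arguments       = $(Process)

# Per-job output files and a shared event log.
output          = out.$(Process).txt
error           = err.$(Process).txt
log             = jobs.log

# Resources requested from the pool for each job.
request_cpus    = 1
request_memory  = 1GB
request_disk    = 2GB

# Queue 100 independent jobs, numbered 0 through 99 via $(Process).
queue 100
```

Submitting this file with `condor_submit` at an AP places the jobs in the queue; each job then runs on an Execution Point once resources are matched.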
The Central Manager locates daemons and allocates shares of the overall resource pool.
The Compute Entrypoint (CE) provides a mechanism to integrate a local resource (such as a batch system) with the outside world.
The Open Science Compute Federation (OSCF) provides a set of services for requesting and allocating computing resources and for creating dHTC environments.
The Open Science Pool (OSPool) is an environment for any scientist or group doing open science in the US.
The Open Science Data Federation (OSDF) is a set of federated origin and cache services that coordinate a namespace for data access.
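Jobs can read data from the OSDF namespace through HTCondor's file-transfer mechanism. A hedged sketch of the relevant submit-file lines (the namespace path below is hypothetical, not an actual published OSDF path):

```
# Fetch an input file from the OSDF namespace at job start
# (hypothetical path shown for illustration).
transfer_input_files = osdf:///ospool/example/data/input.dat
```

With an `osdf://` URL in `transfer_input_files`, the file is fetched from a nearby OSDF cache rather than transferred from the Access Point, reducing load on the AP for widely shared inputs.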