Platform types¶
The Pydantic models in dyff.schema.platform
define the data schema for entities stored in the Dyff system. You will typically encounter these types in the responses from Dyff API functions.
Core platform types¶
The core platform types are the ones that you create, manage, and query through
the Dyff API. The core types describe the steps of the auditing workflow that
produces audit reports from models and data. Instances of core types all have a
unique .id, belong to an .account, and have additional metadata fields that are
updated by the platform. In particular, the .status and .reason fields tell you
how the work is proceeding and whether it is complete or has encountered an error.
- pydantic model dyff.schema.platform.Dataset¶
Bases: DyffEntity, DatasetBase
An “ingested” data set in our standardized PyArrow format.
- field account: str [Required]¶
Account that owns the entity
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field artifacts: list[Artifact] [Required]¶
Artifacts that comprise the dataset
- Constraints:
minItems = 1
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field id: str [Required]¶
Unique identifier of the entity
- field kind: Literal['Dataset'] = 'Dataset'¶
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
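A rough validation sketch of the label conventions just described (the regexes below approximate the kubernetes rules and are not taken from the Dyff code):

```python
import re

# Approximation of the kubernetes label conventions described above:
# key = optional DNS-subdomain prefix + '/' + name; value = alphanumerics
# separated by '.', '-', or '_' (an empty value is allowed).
_NAME = r"[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?"
_PREFIX = r"[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)*"
KEY_RE = re.compile(rf"^({_PREFIX}/)?{_NAME}$")
VALUE_RE = re.compile(rf"^({_NAME})?$")

def is_valid_label(key: str, value: str) -> bool:
    if len(key) > 253 or len(value) > 63:
        return False
    return bool(KEY_RE.match(key)) and bool(VALUE_RE.match(value))

print(is_valid_label("your.com/key_0", "v1.2-beta"))  # True
print(is_valid_label("-bad-key", ""))                 # False
```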
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- field name: str [Required]¶
The name of the Dataset
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field schemaVersion: Literal['0.1'] = '0.1'¶
The schema version.
- field schema_: DataSchema [Required] (alias 'schema')¶
Schema of the dataset
- field status: str = None¶
Top-level resource status (assigned by system)
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Evaluation¶
Bases: DyffEntity, EvaluationBase
A description of how to run an InferenceService on a Dataset to obtain a set of evaluation results.
- field account: str [Required]¶
Account that owns the entity
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field dataset: str [Required]¶
The Dataset to evaluate on.
- field id: str [Required]¶
Unique identifier of the entity
- field inferenceSession: InferenceSessionSpec [Required]¶
Specification of the InferenceSession that will perform inference for the evaluation.
- field inferenceSessionReference: str | None = None¶
ID of a running inference session that will be used for the evaluation instead of starting a new one.
- field kind: Literal['Evaluation'] = 'Evaluation'¶
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field replications: int = 1¶
Number of replications to run.
- field schemaVersion: Literal['0.1'] = '0.1'¶
The schema version.
- field status: str = None¶
Top-level resource status (assigned by system)
- field workersPerReplica: int | None = None¶
Number of data workers per inference service replica.
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have
reason = UnsatisfiedDependency
. If any dependency reaches a failure status, this workflow will also fail withreason = FailedDependency
.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.InferenceService¶
Bases: DyffEntity, InferenceServiceSpec
An InferenceService is an inference model packaged as a Web service.
- field account: str [Required]¶
Account that owns the entity
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field builder: InferenceServiceBuilder | None = None¶
Configuration of the Builder used to build the service.
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field id: str [Required]¶
Unique identifier of the entity
- field interface: InferenceInterface [Required]¶
How to move data in and out of the service.
- field kind: Literal['InferenceService'] = 'InferenceService'¶
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- field model: ForeignModel | None = None¶
The Model backing this InferenceService, if applicable.
- field name: str [Required]¶
The name of the service.
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field runner: InferenceServiceRunner | None = None¶
Configuration of the Runner used to run the service.
- field schemaVersion: Literal['0.1'] = '0.1'¶
The schema version.
- field status: str = None¶
Top-level resource status (assigned by system)
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.InferenceSession¶
Bases: DyffEntity, InferenceSessionSpec
An InferenceSession is a deployment of an InferenceService that exposes an API for interactive queries.
- field accelerator: Accelerator | None = None¶
Accelerator hardware to use.
- field account: str [Required]¶
Account that owns the entity
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field expires: datetime | None = None¶
Expiration time for the session. Use of this field is recommended to avoid accidental compute costs.
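Since a session without an expiration keeps consuming compute, a common pattern is to set expires a fixed interval ahead when creating the session (standard-library datetime only; the surrounding request code is omitted):

```python
from datetime import datetime, timedelta, timezone

# Compute an expiration one hour from now, in UTC, suitable for
# populating the 'expires' field of an InferenceSession request.
expires = datetime.now(timezone.utc) + timedelta(hours=1)
print(expires.isoformat())
```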
- field id: str [Required]¶
Unique identifier of the entity
- field inferenceService: ForeignInferenceService [Required]¶
InferenceService ID
- field kind: Literal['InferenceSession'] = 'InferenceSession'¶
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field replicas: int = 1¶
Number of model replicas
- field schemaVersion: Literal['0.1'] = '0.1'¶
The schema version.
- field status: str = None¶
Top-level resource status (assigned by system)
- field useSpotPods: bool = True¶
Use ‘spot pods’ for cheaper computation
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Measurement¶
Bases: DyffEntity, MeasurementSpec, Analysis
- field account: str [Required]¶
Account that owns the entity
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field arguments: list[AnalysisArgument] [Optional]¶
Arguments to pass to the Method implementation.
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field data: list[AnalysisData] [Optional]¶
Additional data to supply to the analysis.
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field id: str [Required]¶
Unique identifier of the entity
- field inputs: list[AnalysisInput] [Optional]¶
Mapping of keywords to data entities.
- field kind: Literal['Measurement'] = 'Measurement'¶
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- field level: MeasurementLevel [Required]¶
Measurement level
- field method: ForeignMethod [Required]¶
The analysis Method to run.
- field name: str [Required]¶
Descriptive name of the Measurement.
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field schemaVersion: Literal['0.1'] = '0.1'¶
The schema version.
- field schema_: DataSchema [Required] (alias 'schema')¶
Schema of the measurement data. Instance-level measurements must include an _index_ field.
- field scope: AnalysisScope [Optional]¶
The specific entities to which the analysis results apply. At a minimum, the field corresponding to method.scope must be set.
- field status: str = None¶
Top-level resource status (assigned by system)
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Method¶
Bases: DyffEntity, MethodBase
- field account: str [Required]¶
Account that owns the entity
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field id: str [Required]¶
Unique identifier of the entity
- field implementation: MethodImplementation [Required]¶
How the Method is implemented.
- field inputs: list[MethodInput] [Optional]¶
Input data entities consumed by the Method. Available at ctx.inputs(keyword)
- field kind: Literal['Method'] = 'Method'¶
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- field modules: list[str] [Optional]¶
Modules to load into the analysis environment
- field name: str [Required]¶
Descriptive name of the Method.
- field output: MethodOutput [Required]¶
Specification of the Method output.
- field parameters: list[MethodParameter] [Optional]¶
Configuration parameters accepted by the Method. Values are available at ctx.args(keyword)
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field schemaVersion: Literal['0.1'] = '0.1'¶
The schema version.
- field scope: MethodScope [Required]¶
The scope of the Method. The Method produces outputs that are specific to one entity of the type specified in the .scope field.
- field status: str = None¶
Top-level resource status (assigned by system)
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Model¶
Bases: DyffEntity, ModelSpec
A Model is the “raw” form of an inference model, from which one or more InferenceServices may be built.
- field accelerators: list[Accelerator] | None = None¶
Accelerator hardware that is compatible with the model.
- field account: str [Required]¶
Account that owns the entity
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field artifact: ModelArtifact [Required]¶
How the model data is represented
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field id: str [Required]¶
Unique identifier of the entity
- field kind: Literal['Model'] = 'Model'¶
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- field name: str [Required]¶
The name of the Model.
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field resources: ModelResources [Required]¶
Resource requirements of the model.
- field schemaVersion: Literal['0.1'] = '0.1'¶
The schema version.
- field source: ModelSource [Required]¶
Source from which the model artifact was obtained
- field status: str = None¶
Top-level resource status (assigned by system)
- field storage: ModelStorage [Required]¶
How the model data is stored
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Module¶
Bases: DyffEntity, ModuleBase
An extension module that can be loaded into Report workflows.
- field account: str [Required]¶
Account that owns the entity
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field artifacts: list[Artifact] [Required]¶
Artifacts that comprise the Module implementation
- Constraints:
minItems = 1
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field id: str [Required]¶
Unique identifier of the entity
- field kind: Literal['Module'] = 'Module'¶
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- field name: str [Required]¶
The name of the Module
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field schemaVersion: Literal['0.1'] = '0.1'¶
The schema version.
- field status: str = None¶
Top-level resource status (assigned by system)
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Report¶
Bases: DyffEntity, ReportBase
A Report transforms raw model outputs into some useful statistics.
Deprecated since version 0.8.0: Report functionality has been refactored into the Method/Measurement/Analysis apparatus. Creation of new Reports is disabled.
- field account: str [Required]¶
Account that owns the entity
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field dataset: str [Required]¶
The input dataset.
- field datasetView: DataView | None = None¶
View of the input dataset required by the report (e.g., ground-truth labels).
- field evaluation: str [Required]¶
The evaluation (and corresponding output data) to run the report on.
- field evaluationView: DataView | None = None¶
View of the evaluation output data required by the report.
- field id: str [Required]¶
Unique identifier of the entity
- field inferenceService: str [Required]¶
The inference service used in the evaluation
- field kind: Literal['Report'] = 'Report'¶
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- field model: str | None = None¶
The model backing the inference service, if applicable
- field modules: list[str] [Optional]¶
Additional modules to load into the report environment
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field rubric: str [Required]¶
The scoring rubric to apply (e.g., ‘classification.TopKAccuracy’).
- field schemaVersion: Literal['0.1'] = '0.1'¶
The schema version.
- field status: str = None¶
Top-level resource status (assigned by system)
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.SafetyCase¶
Bases: DyffEntity, SafetyCaseSpec, Analysis
- field account: str [Required]¶
Account that owns the entity
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field arguments: list[AnalysisArgument] [Optional]¶
Arguments to pass to the Method implementation.
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field data: list[AnalysisData] [Optional]¶
Additional data to supply to the analysis.
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field id: str [Required]¶
Unique identifier of the entity
- field inputs: list[AnalysisInput] [Optional]¶
Mapping of keywords to data entities.
- field kind: Literal['SafetyCase'] = 'SafetyCase'¶
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- field method: ForeignMethod [Required]¶
The analysis Method to run.
- field name: str [Required]¶
Descriptive name of the SafetyCase.
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field schemaVersion: Literal['0.1'] = '0.1'¶
The schema version.
- field scope: AnalysisScope [Optional]¶
The specific entities to which the analysis results apply. At a minimum, the field corresponding to method.scope must be set.
- field status: str = None¶
Top-level resource status (assigned by system)
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Score¶
Bases:
ScoreData
A Score is a numeric quantity describing an aspect of system performance.
Conceptually, a Score is an “instance” of a ScoreSpec.
- field analysis: str [Required]¶
The Analysis that generated the current score instance.
- field format: str = '{quantity:.1f}'¶
A Python ‘format’ string describing how to render the score as a string. You must use the keyword ‘quantity’ in the format string, and you may use ‘unit’ as well (e.g., ‘{quantity:.2f} {unit}’). It is strongly recommended that you limit the output precision appropriately; use ‘:.0f’ for integer-valued scores.
- Constraints:
pattern = ^(.*[^{])?[{]quantity(:[^}]*)?[}]([^}].*)?$
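As the field description says, this is an ordinary Python format string keyed on 'quantity' (and optionally 'unit'); the values here are arbitrary examples:

```python
# Render a score quantity using format strings of the shape the
# 'format' field expects.
fmt_plain = "{quantity:.1f}"
fmt_unit = "{quantity:.2f} {unit}"

print(fmt_plain.format(quantity=2.5))                # 2.5
print(fmt_unit.format(quantity=9.81, unit="m/s^2"))  # 9.81 m/s^2
print("{quantity:.0f}".format(quantity=42.0))        # 42
```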
- field id: str [Required]¶
Unique identifier of the entity
- field kind: Literal['Score'] = 'Score'¶
- field maximum: float | None = None¶
The maximum possible value, if known.
- field metadata: ScoreMetadata [Required]¶
Metadata about the score; used for indexing.
- field minimum: float | None = None¶
The minimum possible value, if known.
- field name: str [Required]¶
The name of the score. Used as a key for retrieving score data. Must be unique within the Method context.
- Constraints:
maxLength = 127
pattern = ^[a-zA-Z_][a-zA-Z0-9_]*$
- field priority: Literal['primary', 'secondary'] = 'primary'¶
The ‘primary’ score will be displayed in any UI widgets that expect a single score. There must be exactly 1 primary score.
- field quantity: float [Required]¶
The numeric quantity associated with the score.
- field quantityString: str [Required]¶
The formatted string representation of .quantity, after processing with the .format specification.
- field summary: str [Required]¶
A short text description of what the score measures.
- Constraints:
maxLength = 280
- field text: str [Required]¶
A short text description of what the quantity means.
- Constraints:
maxLength = 280
- field title: str [Required]¶
The title text to use when displaying score information.
- Constraints:
maxLength = 140
- field unit: str | None = None¶
The unit of measure, if applicable (e.g., ‘meters’, ‘kJ/g’). Use standard SI abbreviations where possible for better indexing.
- field valence: Literal['positive', 'negative', 'neutral'] = 'neutral'¶
A score has ‘positive’ valence if ‘more is better’, ‘negative’ valence if ‘less is better’, and ‘neutral’ valence if ‘better’ is not meaningful for this score.
- classmethod format_quantity(format: str, quantity: float, *, unit: str | None = None) str ¶
- quantity_string(quantity: float) str ¶
Formats the given quantity as a string, according to the formatting information stored in this ScoreSpec.
- pydantic model dyff.schema.platform.UseCase¶
Bases:
Concern
- field account: str [Required]¶
Account that owns the entity
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field documentation: DocumentationBase [Optional]¶
Documentation of the resource. The content is used to populate various views in the web UI.
- field id: str [Required]¶
Unique identifier of the entity
- field kind: Literal['UseCase'] = 'UseCase'¶
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field schemaVersion: Literal['0.1'] = '0.1'¶
The schema version.
- field status: str = None¶
Top-level resource status (assigned by system)
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- label_key() str ¶
- label_value() str ¶
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
Additional platform types¶
- pydantic model dyff.schema.platform.Accelerator¶
Bases:
DyffSchemaBaseModel
- field gpu: AcceleratorGPU | None = None¶
GPU accelerator options
- field kind: str [Required]¶
The kind of accelerator; available kinds: GPU
- pydantic model dyff.schema.platform.AcceleratorGPU¶
Bases:
DyffSchemaBaseModel
- field count: int = 1¶
Number of GPUs required.
- field hardwareTypes: list[str] [Required]¶
Acceptable GPU hardware types.
- Constraints:
minItems = 1
- field memory: ConstrainedStrValue | None = None¶
[DEPRECATED] Amount of GPU memory required, in k8s Quantity notation
- Constraints:
pattern = ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
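As a concrete illustration, an accelerator request can be written as a plain JSON-style payload that mirrors the fields documented above. This is a sketch only; the hardware type strings are hypothetical placeholders, not values defined by the platform.

```python
# Sketch of an Accelerator payload mirroring the documented fields.
# The hardwareTypes entries are hypothetical placeholders.
accelerator = {
    "kind": "GPU",  # the available accelerator kind
    "gpu": {
        "count": 2,  # number of GPUs required (default 1)
        "hardwareTypes": ["example-gpu-a", "example-gpu-b"],  # minItems = 1
    },
}

# The schema requires at least one acceptable hardware type.
assert len(accelerator["gpu"]["hardwareTypes"]) >= 1
```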
- pydantic model dyff.schema.platform.AccessGrant¶
Bases:
DyffSchemaBaseModel
Grants access to call particular functions on particular instances of particular resource types.
Access grants are additive; the subject of a set of grants has permission to do something if any part of any of those grants gives the subject that permission.
- field accounts: list[str] [Optional]¶
The access grant applies to all resources owned by the listed accounts
- field entities: list[str] [Optional]¶
The access grant applies to all resources with IDs listed in ‘entities’
- field functions: list[APIFunctions] [Required]¶
List of functions on those resources to which the grant applies
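The additive semantics described above can be sketched as a small predicate: the subject has permission if any grant lists the function and covers the resource, either through its owning account or its specific entity ID. The grant shapes mirror the fields documented here; the IDs are made up.

```python
# Sketch of the additive access-grant check: permission is granted if ANY
# grant lists the function AND covers the resource (by account or by ID).
grants = [
    {"functions": ["get", "query"], "accounts": ["acct-1"], "entities": []},
    {"functions": ["consume"], "accounts": [], "entities": ["dataset-42"]},
]

def permits(grants, function, *, account=None, entity=None):
    return any(
        function in g["functions"]
        and (account in g["accounts"] or entity in g["entities"])
        for g in grants
    )

assert permits(grants, "get", account="acct-1")
assert permits(grants, "consume", entity="dataset-42")
assert not permits(grants, "delete", account="acct-1")
```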
- pydantic model dyff.schema.platform.Account¶
Bases:
DyffSchemaBaseModel
An Account in the system.
All entities are owned by an Account.
- field creationTime: datetime | None = None¶
- field id: str | None = None¶
- field name: str [Required]¶
- pydantic model dyff.schema.platform.Analysis¶
Bases:
AnalysisBase
- field data: list[AnalysisData] [Optional]¶
Additional data to supply to the analysis.
- field method: ForeignMethod [Required]¶
The analysis Method to run.
- pydantic model dyff.schema.platform.AnalysisArgument¶
Bases:
DyffSchemaBaseModel
- field keyword: str [Required]¶
The ‘keyword’ of the corresponding ModelParameter.
- field value: str [Required]¶
The value of the argument. Always a string; implementations are responsible for parsing.
- pydantic model dyff.schema.platform.AnalysisBase¶
Bases:
DyffSchemaBaseModel
- field arguments: list[AnalysisArgument] [Optional]¶
Arguments to pass to the Method implementation.
- field inputs: list[AnalysisInput] [Optional]¶
Mapping of keywords to data entities.
- field scope: AnalysisScope [Optional]¶
The specific entities to which the analysis results apply. At a minimum, the field corresponding to method.scope must be set.
- pydantic model dyff.schema.platform.AnalysisInput¶
Bases:
DyffSchemaBaseModel
- field entity: str [Required]¶
The ID of the entity whose data should be made available as ‘keyword’.
- field keyword: str [Required]¶
The ‘keyword’ specified for this input in the MethodSpec.
- pydantic model dyff.schema.platform.AnalysisOutputQueryFields¶
Bases:
DyffSchemaBaseModel
- field analysis: str = None¶
ID of the Analysis that produced the output.
- field inputs: list[str] = None¶
IDs of resources that were inputs to the Analysis.
- field method: QueryableDyffEntity [Required]¶
Identifying information about the Method that was run to produce the output.
- pydantic model dyff.schema.platform.AnalysisScope¶
Bases:
DyffSchemaBaseModel
The specific entities to which the analysis applies.
When applying an InferenceService-scoped Method, at least .inferenceService must be set. When applying an Evaluation-scoped Method, at least .evaluation, .inferenceService, and .dataset must be set.
- field dataset: str | None = None¶
The Dataset to which the analysis applies.
- field evaluation: str | None = None¶
The Evaluation to which the analysis applies.
- field inferenceService: str | None = None¶
The InferenceService to which the analysis applies.
- field model: str | None = None¶
The Model to which the analysis applies.
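For illustration, here are sketches of the minimal scope payloads for the two Method scopes, using placeholder IDs.

```python
# Minimal AnalysisScope payloads for the two Method scopes.
# All IDs are placeholders.
service_scope = {"inferenceService": "svc-123"}  # InferenceService-scoped Method

evaluation_scope = {  # Evaluation-scoped Method
    "evaluation": "eval-456",
    "inferenceService": "svc-123",
    "dataset": "data-789",
}

# An Evaluation-scoped analysis must set all three of these fields.
assert {"evaluation", "inferenceService", "dataset"} <= evaluation_scope.keys()
```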
- pydantic model dyff.schema.platform.Annotation¶
Bases:
DyffSchemaBaseModel
- field key: str [Required]¶
The annotation key. A DNS label with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Names prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
See https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations for detailed naming rules.
- Constraints:
maxLength = 253
pattern = ^([a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?(\.[a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?)*/)?[a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?$
- field value: str [Required]¶
The annotation value. An arbitrary string.
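The key syntax can be checked against the documented pattern. The regular expression below is reconstructed on the assumption that the backslash escapes (e.g. before the dot in the DNS-domain prefix) were dropped when this page was rendered.

```python
import re

# Annotation key pattern: an optional DNS-domain prefix, a '/', then a DNS
# label. Reconstructed with the escaping presumed lost in rendering.
KEY_PATTERN = re.compile(
    r"^([a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?"
    r"(\.[a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?)*/)?"
    r"[a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?$"
)

assert KEY_PATTERN.match("my-key")
assert KEY_PATTERN.match("your.com/key-0")  # hyphenated variant of the docs' example
assert not KEY_PATTERN.match("-bad")        # keys must start with an alphanumeric
```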
- class dyff.schema.platform.APIFunctions(value)¶
Bases: str, Enum
Categories of API operations to which access can be granted.
- all = '*'¶
- consume = 'consume'¶
Use the resource as a dependency in another workflow.
Example: running an Evaluation on a Dataset requires consume permission for the Dataset.
- create = 'create'¶
Create a new instance of the resource.
For resources that require uploading artifacts (such as Dataset), also grants access to the upload and finalize endpoints.
- data = 'data'¶
Download the raw data associated with the resource.
- delete = 'delete'¶
Set the resource status to Deleted.
- download = 'download'¶
Deprecated since version 0.5.0: This functionality has been consolidated into data.
- edit = 'edit'¶
Edit properties of existing resources.
- get = 'get'¶
Retrieve a single resource instance by ID.
- query = 'query'¶
Query the resource collection.
- strata = 'strata'¶
Deprecated since version 0.5.0: Similar functionality will be added in the future but with a different interface.
- terminate = 'terminate'¶
Set the resource status to Terminated.
- upload = 'upload'¶
Deprecated since version 0.5.0: This functionality has been consolidated into create.
- pydantic model dyff.schema.platform.APIKey¶
Bases:
Role
A description of a Role (a set of permissions) granted to a single subject (either an account or a workload).
Dyff API clients authenticate with a token that contains a cryptographically signed APIKey.
- field created: datetime [Required]¶
When the APIKey was created. Maps to JWT ‘iat’ claim.
- field expires: datetime [Required]¶
When the APIKey expires. Maps to JWT ‘exp’ claim.
- field id: str [Required]¶
Unique ID of the resource. Maps to JWT ‘jti’ claim.
- field secret: str | None = None¶
For account keys: a secret value to check when verifying the APIKey
- field subject: str [Required]¶
Subject of access grants (‘<kind>/<id>’). Maps to JWT ‘sub’ claim.
- pydantic model dyff.schema.platform.ArchiveFormat¶
Bases:
DyffSchemaBaseModel
Specification of the archives that comprise a DataSource.
- field format: str [Required]¶
- field name: str [Required]¶
- pydantic model dyff.schema.platform.Artifact¶
Bases:
DyffSchemaBaseModel
- field kind: str | None = None¶
The kind of artifact
- field path: str [Required]¶
The relative path of the artifact within the tree
- pydantic model dyff.schema.platform.ArtifactURL¶
Bases:
DyffSchemaBaseModel
- field signedURL: StorageSignedURL [Required]¶
- pydantic model dyff.schema.platform.Audit¶
Bases:
DyffEntity
An instance of applying an AuditProcedure to an InferenceService.
- field auditProcedure: str [Required]¶
The AuditProcedure to run.
- field inferenceService: str [Required]¶
The InferenceService to audit.
- field kind: Literal['Audit'] = 'Audit'¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.AuditProcedure¶
Bases:
DyffEntity
An audit procedure that can be run against a set of evaluation reports.
- field kind: Literal['AuditProcedure'] = 'AuditProcedure'¶
- field name: str [Required]¶
- field requirements: list[AuditRequirement] [Optional]¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.AuditRequirement¶
Bases:
DyffSchemaBaseModel
An evaluation report that must exist in order to apply an AuditProcedure.
- field dataset: str [Required]¶
- field rubric: str [Required]¶
- pydantic model dyff.schema.platform.Concern¶
Bases: ConcernBase, DyffEntity
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- label_key() str ¶
- label_value() str ¶
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.ConcernBase¶
Bases:
Documented
- pydantic model dyff.schema.platform.ContainerImageSource¶
Bases:
DyffSchemaBaseModel
- field digest: str [Required]¶
The digest of the image. The image is always pulled by digest, even if ‘tag’ is specified.
- Constraints:
pattern = ^sha256:[0-9a-f]{64}$
- field host: str [Required]¶
The host of the container image registry.
- field name: str [Required]¶
The name of the image
- Constraints:
pattern = ^[a-z0-9]+((\.|_|__|-+)[a-z0-9]+)*(/[a-z0-9]+((\.|_|__|-+)[a-z0-9]+)*)*$
- field tag: ConstrainedStrValue | None = None¶
The tag of the image. Although the image is always pulled by digest, including the tag is strongly recommended as it is often the main source of versioning information.
- Constraints:
maxLength = 317
pattern = ^[a-zA-Z0-9_][a-zA-Z0-9._-]{0,127}$
- url() str ¶
- validator validate_host » host¶
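Since images are always pulled by digest, a full image reference can be assembled from these fields. The exact string returned by url() is not specified on this page, so the composition below is an assumption; the digest check uses the documented constraint.

```python
import re

# Sketch of a ContainerImageSource payload; host/name/tag are placeholders,
# and the digest is a dummy value in the documented sha256 format.
image = {
    "host": "registry.example.com",
    "name": "example/runner",
    "digest": "sha256:" + "a" * 64,
    "tag": "v1.2.3",  # optional, but recommended as versioning information
}

# Documented digest constraint.
assert re.match(r"^sha256:[0-9a-f]{64}$", image["digest"])

# Hypothetical composition of a by-digest image reference (url() may differ).
reference = f"{image['host']}/{image['name']}@{image['digest']}"
```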
- pydantic model dyff.schema.platform.DataSchema¶
Bases:
DyffSchemaBaseModel
- field arrowSchema: str [Required]¶
The schema in Arrow format, encoded with dyff.schema.arrow.encode_schema(). This is required, but can be populated from a DyffDataSchema.
- field dyffSchema: DyffDataSchema | None = None¶
The schema in DyffDataSchema format
- field jsonSchema: dict[str, Any] | None = None¶
The schema in JSON Schema format
- static from_model(model: Type[DyffSchemaBaseModel]) DataSchema ¶
- static make_input_schema(schema: Schema | Type[DyffSchemaBaseModel] | DyffDataSchema) DataSchema ¶
Construct a complete DataSchema for inference inputs.
This function will add required special fields for input data and then convert the augmented schema as necessary to populate at least the required arrowSchema field in the resulting DataSchema.
- static make_output_schema(schema: Schema | Type[DyffSchemaBaseModel] | DyffDataSchema) DataSchema ¶
Construct a complete DataSchema for inference outputs.
This function will add required special fields for output data and then convert the augmented schema as necessary to populate at least the required arrowSchema field in the resulting DataSchema.
- pydantic model dyff.schema.platform.DataSource¶
Bases:
DyffEntity
A source of raw data from which a Dataset can be built.
- field kind: Literal['DataSource'] = 'DataSource'¶
- field name: str [Required]¶
- field source: str | None = None¶
- field sourceKind: str [Required]¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- platform.DataSources = ('huggingface', 'upload', 'zenodo')¶
- pydantic model dyff.schema.platform.DataView¶
Bases:
DyffSchemaBaseModel
- field adapterPipeline: list[SchemaAdapter] | None = None¶
Adapter pipeline to apply to produce the view
- field id: str [Required]¶
Unique ID of the DataView
- field schema_: DataSchema [Required] (alias 'schema')¶
Schema of the output of this view
- field viewOf: str [Required]¶
ID of the resource that this is a view of
- pydantic model dyff.schema.platform.DatasetBase¶
Bases:
DyffSchemaBaseModel
- field artifacts: list[Artifact] [Required]¶
Artifacts that comprise the dataset
- Constraints:
minItems = 1
- field name: str [Required]¶
The name of the Dataset
- field schema_: DataSchema [Required] (alias 'schema')¶
Schema of the dataset
- pydantic model dyff.schema.platform.DatasetFilter¶
Bases:
DyffSchemaBaseModel
A rule for restricting which instances in a Dataset are returned.
- field field: str [Required]¶
- field relation: str [Required]¶
- field value: str [Required]¶
- pydantic model dyff.schema.platform.Digest¶
Bases:
DyffSchemaBaseModel
- field md5: str | None = None¶
md5 digest of artifact data
- pydantic model dyff.schema.platform.Documentation¶
Bases: SchemaVersion, DocumentationBase
- field entity: str | None = None¶
The ID of the documented entity. This is Optional for backward compatibility but it will always be populated in API responses.
- pydantic model dyff.schema.platform.DocumentationBase¶
Bases:
DyffSchemaBaseModel
- field fullPage: str | None = None¶
Long-form documentation. Interpreted as Markdown. There are no length constraints, but be reasonable.
- field summary: str | None = None¶
A brief summary, suitable for display in small UI elements. Interpreted as Markdown. Excessively long summaries may be truncated in the UI, especially on small displays.
- field title: str | None = None¶
A short plain string suitable as a title or “headline”.
- pydantic model dyff.schema.platform.Documented¶
Bases:
DyffSchemaBaseModel
- field documentation: DocumentationBase [Optional]¶
Documentation of the resource. The content is used to populate various views in the web UI.
- pydantic model dyff.schema.platform.DyffDataSchema¶
Bases:
DyffSchemaBaseModel
- field components: list[str] [Required]¶
A list of named dyff data schemas. The final schema is the composition of these component schemas.
- Constraints:
minItems = 1
- field schemaVersion: Literal['0.1'] = '0.1'¶
The dyff schema version
- model_type() Type[DyffSchemaBaseModel] ¶
The composite model type.
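A DyffDataSchema payload is simply a version tag plus the list of component schemas to compose. The component names below are hypothetical placeholders, not names guaranteed to exist in the schema library.

```python
# Sketch of a DyffDataSchema payload: the final schema is the composition of
# the named component schemas. Component names here are hypothetical.
dyff_schema = {
    "schemaVersion": "0.1",
    "components": ["text", "classification"],  # placeholders; minItems = 1
}

# At least one component schema is required.
assert len(dyff_schema["components"]) >= 1
```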
- pydantic model dyff.schema.platform.DyffEntity¶
Bases: Status, Labeled, SchemaVersion, DyffModelWithID
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field kind: Literal['Analysis', 'Audit', 'AuditProcedure', 'DataSource', 'Dataset', 'Evaluation', 'Family', 'Hazard', 'History', 'InferenceService', 'InferenceSession', 'Measurement', 'Method', 'Model', 'Module', 'Report', 'Revision', 'SafetyCase', 'UseCase'] [Required]¶
- field lastTransitionTime: datetime | None = None¶
Time of last (status, reason) change.
- abstract dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- abstract resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.DyffModelWithID¶
Bases:
DyffSchemaBaseModel
- field account: str [Required]¶
Account that owns the entity
- field id: str [Required]¶
Unique identifier of the entity
- pydantic model dyff.schema.platform.DyffSchemaBaseModel¶
Bases:
DyffBaseModel
This should be the base class for almost all non-request models in the Dyff schema. Models that do not inherit from this class must still inherit from DyffBaseModel.
Adds a root validator to ensure that all datetime fields are represented in the UTC timezone. This is necessary to avoid errors when comparing “naive” and “aware” datetimes. Using the UTC timezone everywhere ensures that JSON representations of datetimes are well-ordered.
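The UTC normalization can be sketched as follows. Treating naive datetimes as already being in UTC is an assumption about the validator's behavior; aware datetimes are converted so that comparisons and JSON serialization are consistent.

```python
from datetime import datetime, timedelta, timezone

# Sketch of the UTC normalization described above. Assumption: naive
# datetimes are taken to already be UTC; aware ones are converted.
def to_utc(dt: datetime) -> datetime:
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

# Naive input becomes timezone-aware UTC.
assert to_utc(datetime(2024, 1, 1, 12, 0)).tzinfo == timezone.utc

# An aware datetime at UTC+2 converts to the equivalent UTC instant.
offset = timezone(timedelta(hours=2))
assert to_utc(datetime(2024, 1, 1, 12, 0, tzinfo=offset)).hour == 10
```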
- class dyff.schema.platform.Entities(value)¶
Bases: str, Enum
The kinds of entities in the dyff system.
- Account = 'Account'¶
- Analysis = 'Analysis'¶
- Audit = 'Audit'¶
- AuditProcedure = 'AuditProcedure'¶
- Concern = 'Concern'¶
- DataSource = 'DataSource'¶
- Dataset = 'Dataset'¶
- Documentation = 'Documentation'¶
- Evaluation = 'Evaluation'¶
- Family = 'Family'¶
- Hazard = 'Hazard'¶
- History = 'History'¶
- InferenceService = 'InferenceService'¶
- InferenceSession = 'InferenceSession'¶
- Measurement = 'Measurement'¶
- Method = 'Method'¶
- Model = 'Model'¶
- Module = 'Module'¶
- Report = 'Report'¶
- Revision = 'Revision'¶
- SafetyCase = 'SafetyCase'¶
- Score = 'Score'¶
- UseCase = 'UseCase'¶
- platform.EntityID = <class 'dyff.schema.v0.r1.platform.ConstrainedStrValue'>¶
- pydantic model dyff.schema.platform.EvaluationBase¶
Bases:
DyffSchemaBaseModel
- field dataset: str [Required]¶
The Dataset to evaluate on.
- field replications: int = 1¶
Number of replications to run.
- field workersPerReplica: int | None = None¶
Number of data workers per inference service replica.
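For illustration, a minimal evaluation payload using these fields, with a placeholder Dataset ID:

```python
# Sketch of an EvaluationBase payload. The dataset ID is a placeholder.
evaluation = {
    "dataset": "data-789",   # the Dataset to evaluate on
    "replications": 3,       # default is 1
    "workersPerReplica": 2,  # optional; omit to use the platform default
}

# Each replication runs the full dataset once.
assert evaluation["replications"] == 3
```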
- pydantic model dyff.schema.platform.ExtractorStep¶
Bases:
DyffSchemaBaseModel
Description of a step in the process of turning a hierarchical DataSource into a Dataset.
- field action: str [Required]¶
- field name: str | None = None¶
- field type: str | None = None¶
- pydantic model dyff.schema.platform.Family¶
Bases: DyffEntity, FamilyBase, FamilyMembers
- field documentation: DocumentationBase [Optional]¶
Documentation of the resource family. The content is used to populate various views in the web UI.
- field kind: Literal['Family'] = 'Family'¶
- pydantic model dyff.schema.platform.FamilyBase¶
Bases:
DyffSchemaBaseModel
- field memberKind: FamilyMemberKind [Required]¶
The kind of resource that comprises the family.
- pydantic model dyff.schema.platform.FamilyMember¶
Bases:
FamilyMemberBase
- field creationTime: datetime = None¶
Tag creation time (assigned by system)
- field family: str [Required]¶
Identifier of the Family containing this tag.
- pydantic model dyff.schema.platform.FamilyMemberBase¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
A short description of the member. Interpreted as Markdown. This should include information about how this version of the resource is different from other versions.
- field name: ConstrainedStrValue [Required]¶
An interpretable identifier for the member that is unique in the context of the corresponding Family.
- Constraints:
maxLength = 317
pattern = ^[a-zA-Z0-9_][a-zA-Z0-9._-]{0,127}$
- field resource: str [Required]¶
ID of the resource this member references.
- class dyff.schema.platform.FamilyMemberKind(value)¶
Bases: str, Enum
The kinds of entities that can be members of a Family.
These are resources for which it makes sense to have different versions or variants that evolve over time.
- Dataset = 'Dataset'¶
- InferenceService = 'InferenceService'¶
- Method = 'Method'¶
- Model = 'Model'¶
- Module = 'Module'¶
- pydantic model dyff.schema.platform.FamilyMembers¶
Bases:
DyffSchemaBaseModel
- field members: dict[ConstrainedStrValue, FamilyMember] [Optional]¶
Mapping of names to IDs of member resources.
- pydantic model dyff.schema.platform.ForeignInferenceService¶
Bases: DyffModelWithID, InferenceServiceSpec
- pydantic model dyff.schema.platform.ForeignMethod¶
Bases: DyffModelWithID, MethodBase
- pydantic model dyff.schema.platform.ForeignModel¶
Bases: DyffModelWithID, ModelBase
- class dyff.schema.platform.Frameworks(value)¶
Bases: str, Enum
An enumeration.
- transformers = 'transformers'¶
- pydantic model dyff.schema.platform.Hazard¶
Bases:
Concern
- field kind: Literal['Hazard'] = 'Hazard'¶
- pydantic model dyff.schema.platform.History¶
Bases:
DyffEntity
- field kind: Literal['History'] = 'History'¶
- field latest: str [Required]¶
The ID of the latest Revision
- field revisions: dict[str, RevisionMetadata] [Required]¶
The set of known Revisions
- pydantic model dyff.schema.platform.Identity¶
Bases:
DyffSchemaBaseModel
The identity of an Account according to one or more external identity providers.
- field google: str | None = None¶
- pydantic model dyff.schema.platform.InferenceInterface¶
Bases:
DyffSchemaBaseModel
- field endpoint: str [Required]¶
HTTP endpoint for inference.
- field inputPipeline: list[SchemaAdapter] | None = None¶
Input adapter pipeline.
- field outputPipeline: list[SchemaAdapter] | None = None¶
Output adapter pipeline.
- field outputSchema: DataSchema [Required]¶
Schema of the inference outputs.
- pydantic model dyff.schema.platform.InferenceServiceBase¶
Bases:
DyffSchemaBaseModel
- field builder: InferenceServiceBuilder | None = None¶
Configuration of the Builder used to build the service.
- field interface: InferenceInterface [Required]¶
How to move data in and out of the service.
- field name: str [Required]¶
The name of the service.
- field runner: InferenceServiceRunner | None = None¶
Configuration of the Runner used to run the service.
- pydantic model dyff.schema.platform.InferenceServiceBuilder¶
Bases:
DyffSchemaBaseModel
- field args: list[str] | None = None¶
- field kind: str [Required]¶
- pydantic model dyff.schema.platform.InferenceServiceRunner¶
Bases:
DyffSchemaBaseModel
- field accelerator: Accelerator | None = None¶
Optional accelerator hardware to use
- field args: list[str] | None = None¶
Command line arguments to forward to the runner
- field image: ContainerImageSource | None = None¶
The container image that implements the runner. This field is optional for schema backwards-compatibility, but creating new services with image=None will result in an error.
- field kind: InferenceServiceRunnerKind [Required]¶
- field resources: ModelResources [Required]¶
Resource requirements to run the service.
- class dyff.schema.platform.InferenceServiceRunnerKind(value)¶
Bases: str, Enum
An enumeration.
- BENTOML_SERVICE_OPENLLM = 'bentoml_service_openllm'¶
- HUGGINGFACE = 'huggingface'¶
- MOCK = 'mock'¶
- STANDALONE = 'standalone'¶
- VLLM = 'vllm'¶
- class dyff.schema.platform.InferenceServiceSources(value)¶
Bases: str, Enum
An enumeration.
- build = 'build'¶
- upload = 'upload'¶
- pydantic model dyff.schema.platform.InferenceServiceSpec¶
Bases:
InferenceServiceBase
- field model: ForeignModel | None = None¶
The Model backing this InferenceService, if applicable.
- pydantic model dyff.schema.platform.InferenceSessionAndToken¶
Bases:
DyffSchemaBaseModel
- field inferencesession: InferenceSession [Required]¶
- field token: str [Required]¶
- pydantic model dyff.schema.platform.InferenceSessionBase¶
Bases:
DyffSchemaBaseModel
- field accelerator: Accelerator | None = None¶
Accelerator hardware to use.
- field expires: datetime | None = None¶
Expiration time for the session. Use of this field is recommended to avoid accidental compute costs.
- field replicas: int = 1¶
Number of model replicas
- field useSpotPods: bool = True¶
Use ‘spot pods’ for cheaper computation
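A session request might look like the following sketch; setting expires as recommended bounds the cost of a forgotten session. The one-hour window is an arbitrary choice for the example.

```python
from datetime import datetime, timedelta, timezone

# Sketch of an InferenceSessionBase payload. Setting `expires` is recommended
# to avoid accidental compute costs; here the session expires in one hour.
session = {
    "accelerator": None,  # optional Accelerator spec
    "expires": datetime.now(timezone.utc) + timedelta(hours=1),
    "replicas": 1,        # number of model replicas
    "useSpotPods": True,  # cheaper, preemptible capacity (the default)
}

# The expiration is in the future at creation time.
assert session["expires"] > datetime.now(timezone.utc)
```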
- pydantic model dyff.schema.platform.InferenceSessionReference¶
Bases:
DyffSchemaBaseModel
- field interface: InferenceInterface [Required]¶
How to move data in and out of the service.
- field session: str [Required]¶
The ID of a running inference session.
- pydantic model dyff.schema.platform.InferenceSessionSpec¶
Bases:
InferenceSessionBase
- field inferenceService: ForeignInferenceService [Required]¶
InferenceService ID
- pydantic model dyff.schema.platform.Label¶
Bases:
DyffSchemaBaseModel
A key-value label for a resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field key: ConstrainedStrValue [Required]¶
The label key is a DNS label with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
- Constraints:
maxLength = 317
pattern = ^([a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?(\.[a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?)*/)?[a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?$
- field value: ConstrainedStrValue | None = None¶
The label value consists of alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
- Constraints:
maxLength = 63
pattern = ^([a-z0-9A-Z]([-_.a-z0-9A-Z]{0,61}[a-z0-9A-Z])?)?$
- pydantic model dyff.schema.platform.Labeled¶
Bases:
DyffSchemaBaseModel
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- class dyff.schema.platform.MeasurementLevel(value)¶
Bases: str, Enum
An enumeration.
- Dataset = 'Dataset'¶
- Instance = 'Instance'¶
- pydantic model dyff.schema.platform.MeasurementSpec¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field level: MeasurementLevel [Required]¶
Measurement level
- field name: str [Required]¶
Descriptive name of the Measurement.
- field schema_: DataSchema [Required] (alias 'schema')¶
Schema of the measurement data. Instance-level measurements must include an _index_ field.
- pydantic model dyff.schema.platform.MethodBase¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field implementation: MethodImplementation [Required]¶
How the Method is implemented.
- field inputs: list[MethodInput] [Optional]¶
Input data entities consumed by the Method. Available at ctx.inputs(keyword)
- field modules: list[str] [Optional]¶
Modules to load into the analysis environment
- field name: str [Required]¶
Descriptive name of the Method.
- field output: MethodOutput [Required]¶
Specification of the Method output.
- field parameters: list[MethodParameter] [Optional]¶
Configuration parameters accepted by the Method. Values are available at ctx.args(keyword)
- field scope: MethodScope [Required]¶
The scope of the Method. The Method produces outputs that are specific to one entity of the type specified in the .scope field.
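Putting these fields together, a Method specification might look like the following sketch. All names, IDs, and the parameter are hypothetical; only the field layout comes from the documentation above.

```python
# Sketch of a MethodBase payload. Every concrete value is a placeholder.
method = {
    "name": "word-count",                 # descriptive name
    "scope": "Evaluation",                # outputs are specific to one Evaluation
    "description": "Counts words in system outputs.",
    "implementation": {
        "kind": "PythonFunction",
        "pythonFunction": {"fullyQualifiedName": "my_module.word_count"},
    },
    "inputs": [{"kind": "Evaluation", "keyword": "outputs"}],
    "parameters": [{"keyword": "min_length", "description": "Minimum token length."}],
    "modules": ["module-abc"],            # Modules loaded into the analysis environment
    "output": {"kind": "Measurement"},
}

# Inside the implementation, inputs are available via ctx.inputs("outputs")
# and parameter values via ctx.args("min_length").
assert method["implementation"]["kind"] == "PythonFunction"
```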
- pydantic model dyff.schema.platform.MethodImplementation¶
Bases:
DyffSchemaBaseModel
- field jupyterNotebook: MethodImplementationJupyterNotebook | None = None¶
Specification of a Jupyter notebook to run.
- field kind: str [Required]¶
The kind of implementation
- field pythonFunction: MethodImplementationPythonFunction | None = None¶
Specification of a Python function to call.
- field pythonRubric: MethodImplementationPythonRubric | None = None¶
@deprecated Specification of a Python Rubric to run.
- pydantic model dyff.schema.platform.MethodImplementationJupyterNotebook¶
Bases:
DyffSchemaBaseModel
- field notebookModule: str [Required]¶
ID of the Module that contains the notebook file. This does not add the Module as a dependency; you must do that separately.
- field notebookPath: str [Required]¶
Path to the notebook file relative to the Module root directory.
- class dyff.schema.platform.MethodImplementationKind(value)¶
Bases: str, Enum
An enumeration.
- JupyterNotebook = 'JupyterNotebook'¶
- PythonFunction = 'PythonFunction'¶
- PythonRubric = 'PythonRubric'¶
A Rubric generates an instance-level measurement, consuming a Dataset and an Evaluation.
Deprecated since version 0.8.0: Report functionality has been refactored into the Method/Measurement/Analysis apparatus. Creation of new Reports is disabled.
- pydantic model dyff.schema.platform.MethodImplementationPythonFunction¶
Bases:
DyffSchemaBaseModel
- field fullyQualifiedName: str [Required]¶
The fully-qualified name of the Python function to call.
- pydantic model dyff.schema.platform.MethodImplementationPythonRubric¶
Bases:
DyffSchemaBaseModel
A Rubric generates an instance-level measurement, consuming a Dataset and an Evaluation.
Deprecated since version 0.8.0: Report functionality has been refactored into the Method/Measurement/Analysis apparatus. Creation of new Reports is disabled.
- field fullyQualifiedName: str [Required]¶
The fully-qualified name of the Python Rubric to run.
- pydantic model dyff.schema.platform.MethodInput¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field keyword: str [Required]¶
The input is referred to by ‘keyword’ in the context of the method implementation.
- field kind: MethodInputKind [Required]¶
The kind of input artifact.
- class dyff.schema.platform.MethodInputKind(value)¶
Bases: str, Enum
An enumeration.
- Dataset = 'Dataset'¶
- Evaluation = 'Evaluation'¶
- Measurement = 'Measurement'¶
- Report = 'Report'¶
Deprecated since version 0.8.0: The Report entity is deprecated, but we accept it as an analysis input for backward compatibility.
- pydantic model dyff.schema.platform.MethodOutput¶
Bases:
DyffSchemaBaseModel
- field kind: MethodOutputKind [Required]¶
The kind of output artifact
- field measurement: MeasurementSpec | None = None¶
Specification of a Measurement output.
- field safetyCase: SafetyCaseSpec | None = None¶
Specification of a SafetyCase output.
- class dyff.schema.platform.MethodOutputKind(value)¶
Bases: str, Enum
An enumeration.
- Measurement = 'Measurement'¶
- SafetyCase = 'SafetyCase'¶
- pydantic model dyff.schema.platform.MethodParameter¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field keyword: str [Required]¶
The parameter is referred to by ‘keyword’ in the context of the method implementation.
- class dyff.schema.platform.MethodScope(value)¶
Bases: str, Enum
An enumeration.
- Evaluation = 'Evaluation'¶
- InferenceService = 'InferenceService'¶
- pydantic model dyff.schema.platform.ModelArtifact¶
Bases:
DyffSchemaBaseModel
- field huggingFaceCache: ModelArtifactHuggingFaceCache | None = None¶
Model stored in a HuggingFace cache
- field kind: ModelArtifactKind [Required]¶
How the model data is represented
- pydantic model dyff.schema.platform.ModelArtifactHuggingFaceCache¶
Bases:
DyffSchemaBaseModel
- field repoID: str [Required]¶
Name of the model in the HuggingFace cache
- field revision: str [Required]¶
Model revision
- snapshot_path() str ¶
- class dyff.schema.platform.ModelArtifactKind(value)¶
Bases: str, Enum
An enumeration.
- HuggingFaceCache = 'HuggingFaceCache'¶
- Mock = 'Mock'¶
- pydantic model dyff.schema.platform.ModelBase¶
Bases:
DyffSchemaBaseModel
- field artifact: ModelArtifact [Required]¶
How the model data is represented
- field name: str [Required]¶
The name of the Model.
- field storage: ModelStorage [Required]¶
How the model data is stored
- pydantic model dyff.schema.platform.ModelResources¶
Bases:
DyffSchemaBaseModel
- field memory: ConstrainedStrValue | None = None¶
Amount of memory required to run the model on CPU, in k8s Quantity notation
- Constraints:
pattern = ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
- field storage: ConstrainedStrValue [Required]¶
Amount of storage required for packaged model, in k8s Quantity notation
- Constraints:
pattern = ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
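The memory and storage fields use Kubernetes Quantity notation (e.g., ‘10Gi’, ‘500M’). As an illustrative check only (not part of dyff itself), the constraint pattern above, with regex escapes restored, accepts the usual binary, decimal, and exponent suffixes:

```python
import re

# Kubernetes Quantity pattern, as used by ModelResources.memory/.storage
# (escapes restored; an illustrative check, not dyff code).
QUANTITY = re.compile(
    r"^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))"
    r"(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$"
)

for value in ["10Gi", "500M", "250m", "1.5e3"]:
    assert QUANTITY.match(value), value   # binary, decimal, and exponent forms
assert QUANTITY.match("ten gigs") is None  # free-form text is rejected
```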
- pydantic model dyff.schema.platform.ModelSource¶
Bases:
DyffSchemaBaseModel
- field gitLFS: ModelSourceGitLFS | None = None¶
Specification of a Git LFS source
- field huggingFaceHub: ModelSourceHuggingFaceHub | None = None¶
Specification of a HuggingFace Hub source
- field kind: ModelSourceKinds [Required]¶
The kind of model source
- field openLLM: ModelSourceOpenLLM | None = None¶
Specification of an OpenLLM source
- pydantic model dyff.schema.platform.ModelSourceGitLFS¶
Bases:
DyffSchemaBaseModel
- field url: HttpUrl [Required]¶
The URL of the Git LFS repository
- Constraints:
minLength = 1
maxLength = 2083
format = uri
- pydantic model dyff.schema.platform.ModelSourceHuggingFaceHub¶
Bases:
DyffSchemaBaseModel
These arguments are forwarded to huggingface_hub.snapshot_download()
- field allowPatterns: list[str] | None = None¶
- field ignorePatterns: list[str] | None = None¶
- field repoID: str [Required]¶
- field revision: str [Required]¶
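Since these arguments are forwarded to huggingface_hub.snapshot_download(), a model source can restrict which repository files are fetched. A sketch of the corresponding JSON payload, with a placeholder repository name (substitute your own):

```python
import json

# Hypothetical request body for a Model sourced from the HuggingFace Hub;
# field names follow ModelSource / ModelSourceHuggingFaceHub above.
source = {
    "kind": "HuggingFaceHub",
    "huggingFaceHub": {
        "repoID": "example-org/example-model",  # placeholder repo name
        "revision": "main",
        # Forwarded to snapshot_download(): fetch only weights and configs
        "allowPatterns": ["*.safetensors", "*.json"],
        "ignorePatterns": None,
    },
}
print(json.dumps(source, indent=2))
```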
- class dyff.schema.platform.ModelSourceKinds(value)¶
Bases:
str
,Enum
An enumeration.
- GitLFS = 'GitLFS'¶
- HuggingFaceHub = 'HuggingFaceHub'¶
- Mock = 'Mock'¶
- OpenLLM = 'OpenLLM'¶
- Upload = 'Upload'¶
- pydantic model dyff.schema.platform.ModelSourceOpenLLM¶
Bases:
DyffSchemaBaseModel
- field modelID: str [Required]¶
The specific model identifier (cf. ‘openllm build … --model-id <modelID>’)
- field modelKind: str [Required]¶
The kind of model (cf. ‘openllm build <modelKind>’)
- field modelVersion: str [Required]¶
The version of the model (e.g., a git commit hash)
- pydantic model dyff.schema.platform.ModelSpec¶
Bases:
ModelBase
- field accelerators: list[Accelerator] | None = None¶
Accelerator hardware that is compatible with the model.
- field resources: ModelResources [Required]¶
Resource requirements of the model.
- field source: ModelSource [Required]¶
Source from which the model artifact was obtained
- pydantic model dyff.schema.platform.ModelStorage¶
Bases:
DyffSchemaBaseModel
- field medium: ModelStorageMedium [Required]¶
Storage medium
- class dyff.schema.platform.ModelStorageMedium(value)¶
Bases:
str
,Enum
An enumeration.
- Mock = 'Mock'¶
- ObjectStorage = 'ObjectStorage'¶
- PersistentVolume = 'PersistentVolume'¶
- pydantic model dyff.schema.platform.ModuleBase¶
Bases:
DyffSchemaBaseModel
- field artifacts: list[Artifact] [Required]¶
Artifacts that comprise the Module implementation
- Constraints:
minItems = 1
- field name: str [Required]¶
The name of the Module
- pydantic model dyff.schema.platform.QueryableDyffEntity¶
Bases:
DyffSchemaBaseModel
- field id: str [Required]¶
Unique identifier of the entity
- field name: str [Required]¶
Descriptive name of the resource
- pydantic model dyff.schema.platform.ReportBase¶
Bases:
DyffSchemaBaseModel
- field evaluation: str [Required]¶
The evaluation (and corresponding output data) to run the report on.
- field modules: list[str] [Optional]¶
Additional modules to load into the report environment
- field rubric: str [Required]¶
The scoring rubric to apply (e.g., ‘classification.TopKAccuracy’).
- class dyff.schema.platform.Resources(value)¶
Bases:
str
,Enum
The resource names corresponding to entities that have API endpoints.
- ALL = '*'¶
- Analysis = 'analyses'¶
- Audit = 'audits'¶
- AuditProcedure = 'auditprocedures'¶
- Concern = 'concerns'¶
- DataSource = 'datasources'¶
- Dataset = 'datasets'¶
- Descriptor = 'descriptors'¶
- Documentation = 'documentation'¶
- Evaluation = 'evaluations'¶
- Family = 'families'¶
- Hazard = 'hazards'¶
- History = 'histories'¶
- InferenceService = 'inferenceservices'¶
- InferenceSession = 'inferencesessions'¶
- Measurement = 'measurements'¶
- Method = 'methods'¶
- Model = 'models'¶
- Module = 'modules'¶
- Report = 'reports'¶
- Revision = 'revisions'¶
- SafetyCase = 'safetycases'¶
- Score = 'scores'¶
- Task = 'tasks'¶
Deprecated since version 0.5.0: The Task resource no longer exists, but removing this enum entry breaks existing API keys.
- UseCase = 'usecases'¶
- pydantic model dyff.schema.platform.Revision¶
Bases:
DyffEntity
,RevisionMetadata
- field entity: Audit | AuditProcedure | DataSource | Dataset | Evaluation | Family | Hazard | History | InferenceService | InferenceSession | Measurement | Method | Model | Module | Report | SafetyCase | UseCase [Required]¶
The associated entity data
- field kind: Literal['Revision'] = 'Revision'¶
- pydantic model dyff.schema.platform.RevisionMetadata¶
Bases:
DyffSchemaBaseModel
- field creationTime: datetime [Required]¶
The time when the Revision was created
- pydantic model dyff.schema.platform.Role¶
Bases:
DyffSchemaBaseModel
A set of permissions.
- field grants: list[AccessGrant] [Optional]¶
The permissions granted to the role.
- pydantic model dyff.schema.platform.SafetyCaseSpec¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field name: str [Required]¶
Descriptive name of the SafetyCase.
- pydantic model dyff.schema.platform.SchemaAdapter¶
Bases:
DyffSchemaBaseModel
- field configuration: dict[str, Any] | None = None¶
Configuration for the schema adapter. Must be encodable as JSON.
- field kind: str [Required]¶
Name of a schema adapter available on the platform
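A SchemaAdapter names an adapter provided by the platform together with a JSON-encodable configuration. A minimal sketch, where the adapter name ‘TransformJSON’ is an assumption for illustration (use whatever adapters your Dyff deployment provides):

```python
import json

# Sketch of a SchemaAdapter payload. The adapter kind "TransformJSON" and the
# configuration keys are hypothetical examples.
adapter = {
    "kind": "TransformJSON",
    "configuration": {"text": "$.responses[0].text"},
}

# The configuration must be encodable as JSON:
encoded = json.dumps(adapter)
assert json.loads(encoded) == adapter
```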
- pydantic model dyff.schema.platform.ScoreData¶
Bases:
ScoreSpec
ScoreData is an “instance” of a ScoreSpec containing the concrete measured value for the score.
- field analysis: str [Required]¶
The Analysis that generated the current score instance.
- field metadata: ScoreMetadata [Required]¶
Metadata about the score; used for indexing.
- field quantity: float [Required]¶
The numeric quantity associated with the score.
- field quantityString: str [Required]¶
The formatted string representation of .quantity, after processing with the .format specification.
- field text: str [Required]¶
A short text description of what the quantity means.
- Constraints:
maxLength = 280
- pydantic model dyff.schema.platform.ScoreMetadata¶
Bases:
DyffSchemaBaseModel
Metadata about a Score entity.
- field refs: ScoreMetadataRefs [Required]¶
References to other related Dyff entities.
- pydantic model dyff.schema.platform.ScoreMetadataRefs¶
Bases:
AnalysisScope
References to other Dyff entities related to a Score.
- field method: str [Required]¶
The Method that generates the score.
- pydantic model dyff.schema.platform.ScoreSpec¶
Bases:
DyffSchemaBaseModel
- field format: str = '{quantity:.1f}'¶
A Python ‘format’ string describing how to render the score as a string. You must use the keyword ‘quantity’ in the format string, and you may use ‘unit’ as well (e.g., ‘{quantity:.2f} {unit}’). It is strongly recommended that you limit the output precision appropriately; use ‘:.0f’ for integer-valued scores.
- Constraints:
pattern = ^(.*[^{])?[{]quantity(:[^}]*)?[}]([^}].*)?$
- field maximum: float | None = None¶
The maximum possible value, if known.
- field minimum: float | None = None¶
The minimum possible value, if known.
- field name: str [Required]¶
The name of the score. Used as a key for retrieving score data. Must be unique within the Method context.
- Constraints:
maxLength = 127
pattern = ^[a-zA-Z_][a-zA-Z0-9_]*$
- field priority: Literal['primary', 'secondary'] = 'primary'¶
The ‘primary’ score will be displayed in any UI widgets that expect a single score. There must be exactly 1 primary score.
- field summary: str [Required]¶
A short text description of what the score measures.
- Constraints:
maxLength = 280
- field title: str [Required]¶
The title text to use when displaying score information.
- Constraints:
maxLength = 140
- field unit: str | None = None¶
The unit of measure, if applicable (e.g., ‘meters’, ‘kJ/g’). Use standard SI abbreviations where possible for better indexing.
- field valence: Literal['positive', 'negative', 'neutral'] = 'neutral'¶
A score has ‘positive’ valence if ‘more is better’, ‘negative’ valence if ‘less is better’, and ‘neutral’ valence if ‘better’ is not meaningful for this score.
- classmethod format_quantity(format: str, quantity: float, *, unit: str | None = None) str ¶
- quantity_string(quantity: float) str ¶
Formats the given quantity as a string, according to the formatting information stored in this ScoreSpec.
- pydantic model dyff.schema.platform.Status¶
Bases:
DyffSchemaBaseModel
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field status: str = None¶
Top-level resource status (assigned by system)
- pydantic model dyff.schema.platform.StorageSignedURL¶
Bases:
DyffSchemaBaseModel
- field headers: dict[str, str] [Optional]¶
Mandatory headers that must be passed with the request
- field method: str [Required]¶
The HTTP method applicable to the URL
- field url: str [Required]¶
The signed URL
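A StorageSignedURL tells the client which HTTP method to use and which headers must accompany the request. A sketch of applying one to an upload with the standard library; the URL and header values are placeholders (real values come from the Dyff API), and the request is constructed but not sent:

```python
import urllib.request

# Placeholder StorageSignedURL contents; a real one is returned by the API.
signed = {
    "url": "https://storage.example.com/bucket/object?signature=abc123",
    "method": "PUT",
    "headers": {"content-type": "application/octet-stream"},
}

request = urllib.request.Request(
    signed["url"], data=b"...artifact bytes...", method=signed["method"]
)
for name, value in signed["headers"].items():
    request.add_header(name, value)  # mandatory headers must be passed along

# urllib.request.urlopen(request) would perform the upload (not executed here).
```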
- platform.TagName = <class 'dyff.schema.v0.r1.platform.ConstrainedStrValue'>¶
- pydantic model dyff.schema.platform.TaskSchema¶
Bases:
DyffSchemaBaseModel
- field input: DataSchema [Required]¶
- field objective: str [Required]¶
- field output: DataSchema [Required]¶