Platform types¶
Core platform types¶
The core platform types are the ones that you create, manage, and query through the Dyff API. The core types describe the steps of the auditing workflow that produces audit reports from models and data. Instances of core types all have a unique .id, belong to an .account, and have additional metadata fields that are updated by the platform. In particular, the .status and .reason fields tell you how the work is proceeding and whether it has completed or encountered an error.
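For orientation, here is a minimal sketch of inspecting those platform-managed fields on an entity you have already retrieved (how you retrieve it, e.g. via the Dyff API client, is outside this reference; the terminal status values mentioned in the comments are illustrative, not an exhaustive list):

```python
# Sketch: reading the platform-managed metadata common to all core types.
# `entity` is assumed to be any DyffEntity instance obtained elsewhere
# (for example, from a query through the Dyff API).
def describe(entity) -> str:
    # .status and .reason are assigned by the platform; the exact status
    # strings that indicate success or failure are platform-defined.
    return (
        f"{entity.kind} {entity.id} (account {entity.account}): "
        f"status={entity.status!r}, reason={entity.reason!r}"
    )
```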
- pydantic model dyff.schema.platform.Audit¶
Bases:
DyffEntity
An instance of applying an AuditProcedure to an InferenceService.
- field auditProcedure: str [Required]¶
The AuditProcedure to run.
- field inferenceService: str [Required]¶
The InferenceService to audit.
- field kind: Literal['Audit'] = 'Audit'¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.AuditProcedure¶
Bases:
DyffEntity
An audit procedure that can be run against a set of evaluation reports.
- field kind: Literal['AuditProcedure'] = 'AuditProcedure'¶
- field name: str [Required]¶
- field requirements: list[AuditRequirement] [Optional]¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Dataset¶
Bases: DyffEntity, DatasetBase
An “ingested” data set in our standardized PyArrow format.
- field kind: Literal['Dataset'] = 'Dataset'¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.DataSource¶
Bases:
DyffEntity
A source of raw data from which a Dataset can be built.
- field kind: Literal['DataSource'] = 'DataSource'¶
- field name: str [Required]¶
- field source: str | None = None¶
- field sourceKind: str [Required]¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Evaluation¶
Bases: DyffEntity, EvaluationBase
A description of how to run an InferenceService on a Dataset to obtain a set of evaluation results.
- field inferenceSession: InferenceSessionSpec [Required]¶
Specification of the InferenceSession that will perform inference for the evaluation.
- field inferenceSessionReference: str | None = None¶
ID of a running inference session that will be used for the evaluation instead of starting a new one.
- field kind: Literal['Evaluation'] = 'Evaluation'¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.InferenceService¶
Bases: DyffEntity, InferenceServiceSpec
An InferenceService is an inference model packaged as a Web service.
- field kind: Literal['InferenceService'] = 'InferenceService'¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.InferenceSession¶
Bases: DyffEntity, InferenceSessionSpec
An InferenceSession is a deployment of an InferenceService that exposes an API for interactive queries.
- field kind: Literal['InferenceSession'] = 'InferenceSession'¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Measurement¶
Bases: DyffEntity, MeasurementSpec, Analysis
- field kind: Literal['Measurement'] = 'Measurement'¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Method¶
Bases: DyffEntity, MethodBase
- field kind: Literal['Method'] = 'Method'¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Model¶
Bases: DyffEntity, ModelSpec
A Model is the “raw” form of an inference model, from which one or more InferenceServices may be built.
- field kind: Literal['Model'] = 'Model'¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Module¶
Bases: DyffEntity, ModuleBase
An extension module that can be loaded into Report workflows.
- field kind: Literal['Module'] = 'Module'¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.Report¶
Bases: DyffEntity, ReportBase
A Report transforms raw model outputs into some useful statistics.
Deprecated since version 0.8.0: Report functionality has been refactored into the Method/Measurement/Analysis apparatus. Creation of new Reports is disabled.
- field dataset: str [Required]¶
The input dataset.
- field datasetView: DataView | None = None¶
View of the input dataset required by the report (e.g., ground-truth labels).
- field evaluationView: DataView | None = None¶
View of the evaluation output data required by the report.
- field inferenceService: str [Required]¶
The inference service used in the evaluation
- field kind: Literal['Report'] = 'Report'¶
- field model: str | None = None¶
The model backing the inference service, if applicable
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.SafetyCase¶
Bases: DyffEntity, SafetyCaseSpec, Analysis
- field kind: Literal['SafetyCase'] = 'SafetyCase'¶
- dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
Additional platform types¶
- platform.APIFunctions = <enum 'APIFunctions'>¶
- pydantic model dyff.schema.platform.APIKey¶
Bases:
DyffSchemaBaseModel
A description of a set of permissions granted to a single subject (either an account or a workload).
Dyff API clients authenticate with a token that contains a cryptographically signed APIKey.
- field created: datetime [Required]¶
When the APIKey was created. Maps to JWT ‘iat’ claim.
- field expires: datetime [Required]¶
When the APIKey expires. Maps to JWT ‘exp’ claim.
- field grants: list[AccessGrant] [Optional]¶
AccessGrants associated with the APIKey
- field id: str [Required]¶
Unique ID of the resource. Maps to JWT ‘jti’ claim.
- field secret: str | None = None¶
For account keys: a secret value to check when verifying the APIKey
- field subject: str [Required]¶
Subject of access grants (‘<kind>/<id>’). Maps to JWT ‘sub’ claim.
- pydantic model dyff.schema.platform.Accelerator¶
Bases:
DyffSchemaBaseModel
- field gpu: AcceleratorGPU | None = None¶
GPU accelerator options
- field kind: str [Required]¶
The kind of accelerator. The available kinds are: GPU
- pydantic model dyff.schema.platform.AcceleratorGPU¶
Bases:
DyffSchemaBaseModel
- field count: int = 1¶
Number of GPUs required.
- field hardwareTypes: list[str] [Required]¶
Acceptable GPU hardware types.
- Constraints:
minItems = 1
- field memory: ConstrainedStrValue | None = None¶
[DEPRECATED] Amount of GPU memory required, in k8s Quantity notation
- Constraints:
pattern = ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
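A minimal sketch of requesting GPU acceleration with these two models; the hardware type strings are hypothetical placeholders for whatever identifiers your Dyff deployment actually supports:

```python
from dyff.schema.platform import Accelerator, AcceleratorGPU

# Sketch: one GPU drawn from a list of acceptable hardware types.
# The hardware type names are hypothetical; substitute the identifiers
# supported by your platform instance. At least one type is required.
accelerator = Accelerator(
    kind="GPU",
    gpu=AcceleratorGPU(
        hardwareTypes=["a100", "t4"],  # hypothetical identifiers
        count=1,
    ),
)
```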
- pydantic model dyff.schema.platform.AccessGrant¶
Bases:
DyffSchemaBaseModel
Grants access to call particular functions on particular instances of particular resource types.
Access grants are additive; the subject of a set of grants has permission to do something if any part of any of those grants gives the subject that permission.
- field accounts: list[str] [Optional]¶
The access grant applies to all resources owned by the listed accounts
- field entities: list[str] [Optional]¶
The access grant applies to all resources with IDs listed in ‘entities’
- field functions: list[APIFunctions] [Required]¶
List of functions on those resources to which the grant applies
- field resources: list[Resources] [Required]¶
List of resource types to which the grant applies
- pydantic model dyff.schema.platform.Account¶
Bases:
DyffSchemaBaseModel
An Account in the system.
All entities are owned by an Account.
- field creationTime: datetime | None = None¶
- field id: str | None = None¶
- field name: str [Required]¶
- pydantic model dyff.schema.platform.Analysis¶
Bases:
AnalysisBase
- field method: ForeignMethod [Required]¶
The analysis Method to run.
- pydantic model dyff.schema.platform.AnalysisArgument¶
Bases:
DyffSchemaBaseModel
- field keyword: str [Required]¶
The ‘keyword’ of the corresponding MethodParameter.
- field value: str [Required]¶
The value of the argument. Always a string; implementations are responsible for parsing.
- pydantic model dyff.schema.platform.AnalysisBase¶
Bases:
DyffSchemaBaseModel
- field arguments: list[AnalysisArgument] [Optional]¶
Arguments to pass to the Method implementation.
- field inputs: list[AnalysisInput] [Optional]¶
Mapping of keywords to data entities.
- field scope: AnalysisScope [Optional]¶
The specific entities to which the analysis results apply. At a minimum, the field corresponding to method.scope must be set.
- pydantic model dyff.schema.platform.AnalysisInput¶
Bases:
DyffSchemaBaseModel
- field entity: str [Required]¶
The ID of the entity whose data should be made available as ‘keyword’.
- field keyword: str [Required]¶
The ‘keyword’ specified for this input in the MethodSpec.
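As a sketch of how these two models are used together: an analysis binds Method parameters and Method inputs by keyword. The keywords and IDs below are placeholders and must match what the Method being run declares:

```python
from dyff.schema.platform import AnalysisArgument, AnalysisInput

# Sketch: keyword bindings for an analysis. The keywords must match the
# MethodParameter/MethodInput keywords declared by the Method being run.
arguments = [
    AnalysisArgument(keyword="threshold", value="0.5"),  # values are always strings
]
inputs = [
    AnalysisInput(keyword="measurement", entity="<measurement-id>"),
]
```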
- pydantic model dyff.schema.platform.AnalysisOutputQueryFields¶
Bases:
DyffSchemaBaseModel
- field analysis: str = None¶
ID of the Analysis that produced the output.
- field inputs: list[str] = None¶
IDs of resources that were inputs to the Analysis.
- field method: QueryableDyffEntity [Required]¶
Identifying information about the Method that was run to produce the output.
- pydantic model dyff.schema.platform.AnalysisScope¶
Bases:
DyffSchemaBaseModel
The specific entities to which the analysis applies.
When applying an InferenceService-scoped Method, at least .inferenceService must be set. When applying an Evaluation-scoped Method, at least .evaluation, .inferenceService, and .dataset must be set.
- field dataset: str | None = None¶
The Dataset to which the analysis applies.
- field evaluation: str | None = None¶
The Evaluation to which the analysis applies.
- field inferenceService: str | None = None¶
The InferenceService to which the analysis applies.
- field model: str | None = None¶
The Model to which the analysis applies.
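For example, a sketch of the minimum scope for an Evaluation-scoped Method (the IDs are placeholders):

```python
from dyff.schema.platform import AnalysisScope

# Sketch: Evaluation-scoped analyses must identify at least the evaluation,
# the inference service, and the dataset involved.
scope = AnalysisScope(
    evaluation="<evaluation-id>",
    inferenceService="<inference-service-id>",
    dataset="<dataset-id>",
)
```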
- pydantic model dyff.schema.platform.Annotation¶
Bases:
DyffSchemaBaseModel
- field key: str [Required]¶
The annotation key. A DNS label with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Names prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
See https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations for detailed naming rules.
- Constraints:
maxLength = 253
pattern = ^([a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?(\.[a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?)*/)?[a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?$
- field value: str [Required]¶
The annotation value. An arbitrary string.
- pydantic model dyff.schema.platform.ArchiveFormat¶
Bases:
DyffSchemaBaseModel
Specification of the archives that comprise a DataSource.
- field format: str [Required]¶
- field name: str [Required]¶
- pydantic model dyff.schema.platform.Artifact¶
Bases:
DyffSchemaBaseModel
- field kind: str | None = None¶
The kind of artifact
- field path: str [Required]¶
The relative path of the artifact within the tree
- pydantic model dyff.schema.platform.ArtifactURL¶
Bases:
DyffSchemaBaseModel
- field signedURL: StorageSignedURL [Required]¶
- pydantic model dyff.schema.platform.AuditRequirement¶
Bases:
DyffSchemaBaseModel
An evaluation report that must exist in order to apply an AuditProcedure.
- field dataset: str [Required]¶
- field rubric: str [Required]¶
- pydantic model dyff.schema.platform.ContainerImageSource¶
Bases:
DyffSchemaBaseModel
- field digest: str [Required]¶
The digest of the image. The image is always pulled by digest, even if ‘tag’ is specified.
- Constraints:
pattern = ^sha256:[0-9a-f]{64}$
- field host: str [Required]¶
The host of the container image registry.
- field name: str [Required]¶
The name of the image
- Constraints:
pattern = ^[a-z0-9]+((\.|_|__|-+)[a-z0-9]+)*(/[a-z0-9]+((\.|_|__|-+)[a-z0-9]+)*)*$
- field tag: ConstrainedStrValue | None = None¶
The tag of the image. Although the image is always pulled by digest, including the tag is strongly recommended as it is often the main source of versioning information.
- Constraints:
maxLength = 317
pattern = ^[a-zA-Z0-9_][a-zA-Z0-9._-]{0,127}$
- url() str ¶
- validator validate_host » host¶
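A hedged sketch of a digest-pinned image reference; the registry host, image name, tag, and digest are placeholders and must satisfy the constraints listed above:

```python
from dyff.schema.platform import ContainerImageSource

# Sketch: the image is always pulled by digest; the tag is included only as
# human-readable version information. All values below are placeholders.
image = ContainerImageSource(
    host="registry.example.com",
    name="example-org/runner",
    tag="0.1.0",
    digest="sha256:" + "0" * 64,  # must match ^sha256:[0-9a-f]{64}$
)
```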
- pydantic model dyff.schema.platform.DataSchema¶
Bases:
DyffSchemaBaseModel
- field arrowSchema: str [Required]¶
The schema in Arrow format, encoded with dyff.schema.arrow.encode_schema(). This is required, but can be populated from a DyffDataSchema.
- field dyffSchema: DyffDataSchema | None = None¶
The schema in DyffDataSchema format
- field jsonSchema: dict[str, Any] | None = None¶
The schema in JSON Schema format
- static from_model(model: Type[DyffSchemaBaseModel]) DataSchema ¶
- static make_input_schema(schema: Schema | Type[DyffSchemaBaseModel] | DyffDataSchema) DataSchema ¶
Construct a complete DataSchema for inference inputs. This function will add required special fields for input data and then convert the augmented schema as necessary to populate at least the required arrowSchema field in the resulting DataSchema.
- static make_output_schema(schema: Schema | Type[DyffSchemaBaseModel] | DyffDataSchema) DataSchema ¶
Construct a complete DataSchema for inference outputs. This function will add required special fields for output data and then convert the augmented schema as necessary to populate at least the required arrowSchema field in the resulting DataSchema.
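A sketch of building schemas from a pydantic model; the record type and its field are hypothetical, and the import path for DyffSchemaBaseModel is an assumption (adjust it to match your installed schema version):

```python
import pydantic

from dyff.schema.base import DyffSchemaBaseModel  # assumed import path
from dyff.schema.platform import DataSchema

# Sketch: a hypothetical record type for a text-generation task.
class Completion(DyffSchemaBaseModel):
    text: str = pydantic.Field(description="Generated text")

# make_*_schema adds the platform's required special fields and populates
# at least the .arrowSchema field of the resulting DataSchema.
input_schema = DataSchema.make_input_schema(Completion)
output_schema = DataSchema.make_output_schema(Completion)
```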
- pydantic model dyff.schema.platform.DataView¶
Bases:
DyffSchemaBaseModel
- field adapterPipeline: list[SchemaAdapter] | None = None¶
Adapter pipeline to apply to produce the view
- field id: str [Required]¶
Unique ID of the DataView
- field schema_: DataSchema [Required] (alias 'schema')¶
Schema of the output of this view
- field viewOf: str [Required]¶
ID of the resource that this is a view of
- pydantic model dyff.schema.platform.DatasetBase¶
Bases:
DyffSchemaBaseModel
- field artifacts: list[Artifact] [Required]¶
Artifacts that comprise the dataset
- Constraints:
minItems = 1
- field name: str [Required]¶
The name of the Dataset
- field schema_: DataSchema [Required] (alias 'schema')¶
Schema of the dataset
- pydantic model dyff.schema.platform.DatasetFilter¶
Bases:
DyffSchemaBaseModel
A rule for restricting which instances in a Dataset are returned.
- field field: str [Required]¶
- field relation: str [Required]¶
- field value: str [Required]¶
- pydantic model dyff.schema.platform.Digest¶
Bases:
DyffSchemaBaseModel
- field md5: str | None = None¶
md5 digest of artifact data
- pydantic model dyff.schema.platform.Documentation¶
Bases: SchemaVersion, DocumentationBase
- pydantic model dyff.schema.platform.DocumentationBase¶
Bases:
DyffSchemaBaseModel
- field fullPage: str | None = None¶
Long-form documentation. Interpreted as Markdown. There are no length constraints, but be reasonable.
- field summary: str | None = None¶
A brief summary, suitable for display in small UI elements. Interpreted as Markdown. Excessively long summaries may be truncated in the UI, especially on small displays.
- field title: str | None = None¶
A short plain string suitable as a title or “headline”.
- pydantic model dyff.schema.platform.DyffDataSchema¶
Bases:
DyffSchemaBaseModel
- field components: list[str] [Required]¶
A list of named dyff data schemas. The final schema is the composition of these component schemas.
- Constraints:
minItems = 1
- field schemaVersion: Literal['0.1'] = '0.1'¶
The dyff schema version
- model_type() Type[DyffSchemaBaseModel] ¶
The composite model type.
- pydantic model dyff.schema.platform.DyffEntity¶
Bases: Status, Labeled, SchemaVersion, DyffModelWithID
- field annotations: list[Annotation] [Optional]¶
A set of key-value annotations for the resource. Used to attach arbitrary non-identifying metadata to resources. We follow the kubernetes annotation conventions closely.
See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
- field creationTime: datetime = None¶
Resource creation time (assigned by system)
- field kind: Literal['Analysis', 'Audit', 'AuditProcedure', 'DataSource', 'Dataset', 'Evaluation', 'Family', 'History', 'InferenceService', 'InferenceSession', 'Measurement', 'Method', 'Model', 'Module', 'Report', 'Revision', 'SafetyCase'] [Required]¶
- abstract dependencies() list[str] ¶
List of IDs of resources that this resource depends on.
The workflow cannot start until all dependencies have reached a success status. Workflows waiting for dependencies have reason = UnsatisfiedDependency. If any dependency reaches a failure status, this workflow will also fail with reason = FailedDependency.
- abstract resource_allocation() ResourceAllocation | None ¶
Resource allocation required to run this workflow, if any.
- pydantic model dyff.schema.platform.DyffModelWithID¶
Bases:
DyffSchemaBaseModel
- field account: str [Required]¶
Account that owns the entity
- field id: str [Required]¶
Unique identifier of the entity
- platform.Entities = <enum 'Entities'>¶
- pydantic model dyff.schema.platform.EvaluationBase¶
Bases:
DyffSchemaBaseModel
- field dataset: str [Required]¶
The Dataset to evaluate on.
- field replications: int = 1¶
Number of replications to run.
- field workersPerReplica: int | None = None¶
Number of data workers per inference service replica.
- pydantic model dyff.schema.platform.ExtractorStep¶
Bases:
DyffSchemaBaseModel
Description of a step in the process of turning a hierarchical DataSource into a Dataset.
- field action: str [Required]¶
- field name: str | None = None¶
- field type: str | None = None¶
- pydantic model dyff.schema.platform.Family¶
Bases: DyffEntity, FamilyBase, FamilyMembers
- field documentation: DocumentationBase [Optional]¶
Documentation of the resource family. The content is used to populate various views in the web UI.
- field kind: Literal['Family'] = 'Family'¶
- pydantic model dyff.schema.platform.FamilyBase¶
Bases:
DyffSchemaBaseModel
- field memberKind: FamilyMemberKind [Required]¶
The kind of resource that comprises the family.
- pydantic model dyff.schema.platform.FamilyMember¶
Bases:
FamilyMemberBase
- field creationTime: datetime = None¶
Tag creation time (assigned by system)
- field family: str [Required]¶
Identifier of the Family containing this tag.
- pydantic model dyff.schema.platform.FamilyMemberBase¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
A short description of the member. Interpreted as Markdown. This should include information about how this version of the resource is different from other versions.
- field name: ConstrainedStrValue [Required]¶
An interpretable identifier for the member that is unique in the context of the corresponding Family.
- Constraints:
maxLength = 317
pattern = ^[a-zA-Z0-9_][a-zA-Z0-9._-]{0,127}$
- field resource: str [Required]¶
ID of the resource this member references.
- platform.FamilyMemberKind = <enum 'FamilyMemberKind'>¶
- pydantic model dyff.schema.platform.FamilyMembers¶
Bases:
DyffSchemaBaseModel
- field members: dict[ConstrainedStrValue, FamilyMember] [Optional]¶
Mapping of names to IDs of member resources.
- pydantic model dyff.schema.platform.ForeignInferenceService¶
Bases: DyffModelWithID, InferenceServiceSpec
- pydantic model dyff.schema.platform.ForeignMethod¶
Bases: DyffModelWithID, MethodBase
- pydantic model dyff.schema.platform.ForeignModel¶
Bases: DyffModelWithID, ModelBase
- platform.Frameworks = <enum 'Frameworks'>¶
- pydantic model dyff.schema.platform.History¶
Bases:
DyffEntity
- field kind: Literal['History'] = 'History'¶
- field latest: str [Required]¶
The ID of the latest Revision
- field revisions: dict[str, RevisionMetadata] [Required]¶
The set of known Revisions
- pydantic model dyff.schema.platform.Identity¶
Bases:
DyffSchemaBaseModel
The identity of an Account according to one or more external identity providers.
- field google: str | None = None¶
- pydantic model dyff.schema.platform.InferenceInterface¶
Bases:
DyffSchemaBaseModel
- field endpoint: str [Required]¶
HTTP endpoint for inference.
- field inputPipeline: list[SchemaAdapter] | None = None¶
Input adapter pipeline.
- field outputPipeline: list[SchemaAdapter] | None = None¶
Output adapter pipeline.
- field outputSchema: DataSchema [Required]¶
Schema of the inference outputs.
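A sketch of a complete interface; the endpoint, the adapter kind, and its configuration are hypothetical placeholders, as is the Completion record type:

```python
import pydantic

from dyff.schema.base import DyffSchemaBaseModel  # assumed import path
from dyff.schema.platform import DataSchema, InferenceInterface, SchemaAdapter

class Completion(DyffSchemaBaseModel):
    text: str = pydantic.Field(description="Generated text")

# Sketch: where to send requests, how to adapt them on the way in, and the
# schema of what comes back. The adapter kind and configuration are placeholders.
interface = InferenceInterface(
    endpoint="generate",
    outputSchema=DataSchema.make_output_schema(Completion),
    inputPipeline=[
        SchemaAdapter(kind="<adapter-kind>", configuration={"prompt": "$.text"}),
    ],
    outputPipeline=None,
)
```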
- pydantic model dyff.schema.platform.InferenceServiceBase¶
Bases:
DyffSchemaBaseModel
- field builder: InferenceServiceBuilder | None = None¶
Configuration of the Builder used to build the service.
- field interface: InferenceInterface [Required]¶
How to move data in and out of the service.
- field name: str [Required]¶
The name of the service.
- field runner: InferenceServiceRunner | None = None¶
Configuration of the Runner used to run the service.
- pydantic model dyff.schema.platform.InferenceServiceBuilder¶
Bases:
DyffSchemaBaseModel
- field args: list[str] | None = None¶
- field kind: str [Required]¶
- pydantic model dyff.schema.platform.InferenceServiceRunner¶
Bases:
DyffSchemaBaseModel
- field accelerator: Accelerator | None = None¶
Optional accelerator hardware to use
- field args: list[str] | None = None¶
Command line arguments to forward to the runner
- field image: ContainerImageSource | None = None¶
The container image that implements the runner. This field is optional for schema backwards-compatibility, but creating new services with image=None will result in an error.
- field kind: InferenceServiceRunnerKind [Required]¶
- field resources: ModelResources [Required]¶
Resource requirements to run the service.
- platform.InferenceServiceRunnerKind = <enum 'InferenceServiceRunnerKind'>¶
- platform.InferenceServiceSources = <enum 'InferenceServiceSources'>¶
- pydantic model dyff.schema.platform.InferenceServiceSpec¶
Bases:
InferenceServiceBase
- field model: ForeignModel | None = None¶
The Model backing this InferenceService, if applicable.
- pydantic model dyff.schema.platform.InferenceSessionAndToken¶
Bases:
DyffSchemaBaseModel
- field inferencesession: InferenceSession [Required]¶
- field token: str [Required]¶
- pydantic model dyff.schema.platform.InferenceSessionBase¶
Bases:
DyffSchemaBaseModel
- field accelerator: Accelerator | None = None¶
Accelerator hardware to use.
- field expires: datetime | None = None¶
Expiration time for the session. Use of this field is recommended to avoid accidental compute costs.
- field replicas: int = 1¶
Number of model replicas
- field useSpotPods: bool = True¶
Use ‘spot pods’ for cheaper computation
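A sketch of typical session options; setting an expiration is recommended so an idle session does not keep consuming (and billing for) accelerator time. The accelerator hardware type is a hypothetical placeholder:

```python
from datetime import datetime, timedelta, timezone

from dyff.schema.platform import Accelerator, AcceleratorGPU

# Sketch: session options with a two-hour expiration, passed when creating
# an inference session.
session_options = dict(
    accelerator=Accelerator(
        kind="GPU",
        gpu=AcceleratorGPU(hardwareTypes=["a100"]),  # hypothetical identifier
    ),
    expires=datetime.now(timezone.utc) + timedelta(hours=2),
    replicas=1,
    useSpotPods=True,
)
```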
- pydantic model dyff.schema.platform.InferenceSessionReference¶
Bases:
DyffSchemaBaseModel
- field interface: InferenceInterface [Required]¶
How to move data in and out of the service.
- field session: str [Required]¶
The ID of a running inference session.
- pydantic model dyff.schema.platform.InferenceSessionSpec¶
Bases:
InferenceSessionBase
- field inferenceService: ForeignInferenceService [Required]¶
InferenceService ID
- pydantic model dyff.schema.platform.Label¶
Bases:
DyffSchemaBaseModel
A key-value label for a resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- field key: ConstrainedStrValue [Required]¶
The label key is a DNS label with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
- Constraints:
maxLength = 317
pattern = ^([a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?(\.[a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?)*/)?[a-zA-Z0-9]([-a-zA-Z0-9]{0,61}[a-zA-Z0-9])?$
- field value: ConstrainedStrValue | None = None¶
The label value consists of alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
- Constraints:
maxLength = 63
pattern = ^([a-z0-9A-Z]([-_.a-z0-9A-Z]{0,61}[a-z0-9A-Z])?)?$
- platform.LabelKey = <class 'dyff.schema.v0.r1.platform.ConstrainedStrValue'>¶
- pydantic model dyff.schema.platform.Labeled¶
Bases:
DyffSchemaBaseModel
- field labels: dict[ConstrainedStrValue, ConstrainedStrValue | None] [Optional]¶
A set of key-value labels for the resource. Used to specify identifying attributes of resources that are meaningful to users but do not imply semantics in the dyff system.
The keys are DNS labels with an optional DNS domain prefix. For example: ‘my-key’, ‘your.com/key_0’. Keys prefixed with ‘dyff.io/’, ‘subdomain.dyff.io/’, etc. are reserved.
The label values are alphanumeric characters separated by ‘.’, ‘-’, or ‘_’.
We follow the kubernetes label conventions closely. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
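For example, a labels mapping that satisfies the documented key and value constraints (the keys and values are illustrative only):

```python
# Sketch: keys are DNS-label style, optionally prefixed with a DNS domain;
# values are short strings of alphanumerics separated by '.', '-', or '_'.
labels = {
    "task": "summarization",
    "example.com/owner": "safety-team",
}
```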
- platform.MeasurementLevel = <enum 'MeasurementLevel'>¶
- pydantic model dyff.schema.platform.MeasurementSpec¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field level: MeasurementLevel [Required]¶
Measurement level
- field name: str [Required]¶
Descriptive name of the Measurement.
- field schema_: DataSchema [Required] (alias 'schema')¶
Schema of the measurement data. Instance-level measurements must include an _index_ field.
- pydantic model dyff.schema.platform.MethodBase¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field implementation: MethodImplementation [Required]¶
How the Method is implemented.
- field inputs: list[MethodInput] [Optional]¶
Input data entities consumed by the Method. Available at ctx.inputs(keyword)
- field modules: list[str] [Optional]¶
Modules to load into the analysis environment
- field name: str [Required]¶
Descriptive name of the Method.
- field output: MethodOutput [Required]¶
Specification of the Method output.
- field parameters: list[MethodParameter] [Optional]¶
Configuration parameters accepted by the Method. Values are available at ctx.args(keyword)
- field scope: MethodScope [Required]¶
The scope of the Method. The Method produces outputs that are specific to one entity of the type specified in the .scope field.
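The inputs and parameters declared here are surfaced to the Method implementation by keyword, as noted in the field descriptions above. The following sketch shows that access pattern; the function signature and the exact type of the context object are assumptions, and the keywords are placeholders:

```python
# Sketch (assumed calling convention): a PythonFunction-style implementation
# reads its declared parameters and inputs from an analysis context object.
def run_analysis(ctx) -> None:
    threshold = float(ctx.args("threshold"))  # a MethodParameter keyword; values arrive as strings
    measurement = ctx.inputs("measurement")   # a MethodInput keyword
    # ... compute and write the declared output (Measurement or SafetyCase) ...
```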
- pydantic model dyff.schema.platform.MethodImplementation¶
Bases:
DyffSchemaBaseModel
- field jupyterNotebook: MethodImplementationJupyterNotebook | None = None¶
Specification of a Jupyter notebook to run.
- field kind: str [Required]¶
The kind of implementation
- field pythonFunction: MethodImplementationPythonFunction | None = None¶
Specification of a Python function to call.
- field pythonRubric: MethodImplementationPythonRubric | None = None¶
@deprecated Specification of a Python Rubric to run.
- pydantic model dyff.schema.platform.MethodImplementationJupyterNotebook¶
Bases:
DyffSchemaBaseModel
- field notebookModule: str [Required]¶
ID of the Module that contains the notebook file. This does not add the Module as a dependency; you must do that separately.
- field notebookPath: str [Required]¶
Path to the notebook file relative to the Module root directory.
- platform.MethodImplementationKind = <enum 'MethodImplementationKind'>¶
- pydantic model dyff.schema.platform.MethodImplementationPythonFunction¶
Bases:
DyffSchemaBaseModel
- field fullyQualifiedName: str [Required]¶
The fully-qualified name of the Python function to call.
- pydantic model dyff.schema.platform.MethodImplementationPythonRubric¶
Bases:
DyffSchemaBaseModel
A Rubric generates an instance-level measurement, consuming a Dataset and an Evaluation.
Deprecated since version 0.8.0: Report functionality has been refactored into the Method/Measurement/Analysis apparatus. Creation of new Reports is disabled.
- field fullyQualifiedName: str [Required]¶
The fully-qualified name of the Python Rubric to run.
- pydantic model dyff.schema.platform.MethodInput¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field keyword: str [Required]¶
The input is referred to by ‘keyword’ in the context of the method implementation.
- field kind: MethodInputKind [Required]¶
The kind of input artifact.
- platform.MethodInputKind = <enum 'MethodInputKind'>¶
- pydantic model dyff.schema.platform.MethodOutput¶
Bases:
DyffSchemaBaseModel
- field kind: MethodOutputKind [Required]¶
The kind of output artifact
- field measurement: MeasurementSpec | None = None¶
Specification of a Measurement output.
- field safetyCase: SafetyCaseSpec | None = None¶
Specification of a SafetyCase output.
- platform.MethodOutputKind = <enum 'MethodOutputKind'>¶
- pydantic model dyff.schema.platform.MethodParameter¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field keyword: str [Required]¶
The parameter is referred to by ‘keyword’ in the context of the method implementation.
- platform.MethodScope = <enum 'MethodScope'>¶
- pydantic model dyff.schema.platform.ModelArtifact¶
Bases:
DyffSchemaBaseModel
- field huggingFaceCache: ModelArtifactHuggingFaceCache | None = None¶
Model stored in a HuggingFace cache
- field kind: ModelArtifactKind [Required]¶
How the model data is represented
- pydantic model dyff.schema.platform.ModelArtifactHuggingFaceCache¶
Bases:
DyffSchemaBaseModel
- field repoID: str [Required]¶
Name of the model in the HuggingFace cache
- field revision: str [Required]¶
Model revision
- snapshot_path() str ¶
- platform.ModelArtifactKind = <enum 'ModelArtifactKind'>¶
- pydantic model dyff.schema.platform.ModelBase¶
Bases:
DyffSchemaBaseModel
- field artifact: ModelArtifact [Required]¶
How the model data is represented
- field name: str [Required]¶
The name of the Model.
- field storage: ModelStorage [Required]¶
How the model data is stored
- pydantic model dyff.schema.platform.ModelResources¶
Bases:
DyffSchemaBaseModel
- field memory: ConstrainedStrValue | None = None¶
Amount of memory required to run the model on CPU, in k8s Quantity notation
- Constraints:
pattern = ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
- field storage: ConstrainedStrValue [Required]¶
Amount of storage required for packaged model, in k8s Quantity notation
- Constraints:
pattern = ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
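For example, resource requirements expressed in Kubernetes Quantity notation (the amounts are placeholders):

```python
from dyff.schema.platform import ModelResources

# Sketch: storage for the packaged model plus memory for CPU execution,
# both in k8s Quantity notation (e.g. "500Mi", "16Gi").
resources = ModelResources(storage="40Gi", memory="16Gi")
```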
- pydantic model dyff.schema.platform.ModelSource¶
Bases:
DyffSchemaBaseModel
- field gitLFS: ModelSourceGitLFS | None = None¶
Specification of a Git LFS source
- field huggingFaceHub: ModelSourceHuggingFaceHub | None = None¶
Specification of a HuggingFace Hub source
- field kind: ModelSourceKinds [Required]¶
The kind of model source
- field openLLM: ModelSourceOpenLLM | None = None¶
Specification of an OpenLLM source
- pydantic model dyff.schema.platform.ModelSourceGitLFS¶
Bases:
DyffSchemaBaseModel
- field url: HttpUrl [Required]¶
The URL of the Git LFS repository
- Constraints:
minLength = 1
maxLength = 2083
format = uri
- pydantic model dyff.schema.platform.ModelSourceHuggingFaceHub¶
Bases:
DyffSchemaBaseModel
These arguments are forwarded to huggingface_hub.snapshot_download()
- field allowPatterns: list[str] | None = None¶
- field ignorePatterns: list[str] | None = None¶
- field repoID: str [Required]¶
- field revision: str [Required]¶
- platform.ModelSourceKinds = <enum 'ModelSourceKinds'>¶
- pydantic model dyff.schema.platform.ModelSourceOpenLLM¶
Bases:
DyffSchemaBaseModel
- field modelID: str [Required]¶
The specific model identifier (cf. ‘openllm build … --model-id <modelId>’)
- field modelKind: str [Required]¶
The kind of model (cf. ‘openllm build <modelKind>’)
- field modelVersion: str [Required]¶
The version of the model (e.g., a git commit hash)
- pydantic model dyff.schema.platform.ModelSpec¶
Bases:
ModelBase
- field accelerators: list[Accelerator] | None = None¶
Accelerator hardware that is compatible with the model.
- field resources: ModelResources [Required]¶
Resource requirements of the model.
- field source: ModelSource [Required]¶
Source from which the model artifact was obtained
- pydantic model dyff.schema.platform.ModelStorage¶
Bases:
DyffSchemaBaseModel
- field medium: ModelStorageMedium [Required]¶
Storage medium
- platform.ModelStorageMedium = <enum 'ModelStorageMedium'>¶
- pydantic model dyff.schema.platform.ModuleBase¶
Bases:
DyffSchemaBaseModel
- field artifacts: list[Artifact] [Required]¶
Artifacts that comprise the Module implementation
- Constraints:
minItems = 1
- field name: str [Required]¶
The name of the Module
- pydantic model dyff.schema.platform.QueryableDyffEntity¶
Bases:
DyffSchemaBaseModel
- field id: str [Required]¶
Unique identifier of the entity
- field name: str [Required]¶
Descriptive name of the resource
- pydantic model dyff.schema.platform.ReportBase¶
Bases:
DyffSchemaBaseModel
- field evaluation: str [Required]¶
The evaluation (and corresponding output data) to run the report on.
- field modules: list[str] [Optional]¶
Additional modules to load into the report environment
- field rubric: str [Required]¶
The scoring rubric to apply (e.g., ‘classification.TopKAccuracy’).
- platform.Resources = <enum 'Resources'>¶
- pydantic model dyff.schema.platform.Revision¶
Bases: DyffEntity, RevisionMetadata
- field entity: Audit | AuditProcedure | DataSource | Dataset | Evaluation | Family | History | InferenceService | InferenceSession | Measurement | Method | Model | Module | Report | SafetyCase [Required]¶
The associated entity data
- field kind: Literal['Revision'] = 'Revision'¶
- pydantic model dyff.schema.platform.RevisionMetadata¶
Bases:
DyffSchemaBaseModel
- field creationTime: datetime = None¶
The time when the Revision was created
- pydantic model dyff.schema.platform.SafetyCaseSpec¶
Bases:
DyffSchemaBaseModel
- field description: str | None = None¶
Long-form description, interpreted as Markdown.
- field name: str [Required]¶
Descriptive name of the SafetyCase.
- pydantic model dyff.schema.platform.SchemaAdapter¶
Bases:
DyffSchemaBaseModel
- field configuration: dict[str, Any] | None = None¶
Configuration for the schema adapter. Must be encodable as JSON.
- field kind: str [Required]¶
Name of a schema adapter available on the platform
- pydantic model dyff.schema.platform.Status¶
Bases:
DyffSchemaBaseModel
- field reason: str | None = None¶
Reason for current status (assigned by system)
- field status: str = None¶
Top-level resource status (assigned by system)
- pydantic model dyff.schema.platform.StorageSignedURL¶
Bases:
DyffSchemaBaseModel
- field headers: dict[str, str] [Optional]¶
Mandatory headers that must be passed with the request
- field method: str [Required]¶
The HTTP method applicable to the URL
- field url: str [Required]¶
The signed URL
- platform.TagName = <class 'dyff.schema.v0.r1.platform.ConstrainedStrValue'>¶
- pydantic model dyff.schema.platform.TaskSchema¶
Bases:
DyffSchemaBaseModel
- field input: DataSchema [Required]¶
- field objective: str [Required]¶
- field output: DataSchema [Required]¶