dyff-api

dyff-api is the API server for the Dyff AI auditing platform.

Deploy the Helm chart:

helm install dyff-api oci://registry.gitlab.com/dyff/charts/dyff-api --version "0.12.0"
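After installation, you can confirm that the release deployed with standard Helm and kubectl commands (the label selector assumes the chart follows the usual app.kubernetes.io labeling conventions):

helm status dyff-api
kubectl get pods -l app.kubernetes.io/instance=dyff-api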

Signing and session keys

You will need to create both a signing key and a session key. Either can be generated with Python or from the shell:

Warning

The signing key and session key must be kept secret.

# Generate a random key with Python:
import secrets

print(secrets.token_hex(64))

# ...or from the shell:
head /dev/urandom | tr -dc A-Za-z0-9 | head -c 64 ; echo

If a signing or session key is not specified, the chart generates a random one on installation. This is convenient for experimentation but unsuitable for production, because new random keys are generated on every upgrade, invalidating everyone's API keys and sessions whenever the application is updated.

To specify custom signing and session keys:

helm install dyff-api oci://registry.gitlab.com/dyff/charts/dyff-api \
  --version "0.12.0" \
  --set auth.signingSecret=3601c3e95389304cab964120479a99f7 \
  --set auth.sessionSecret=2580b3909c8d2f19932e19196e29532f
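Values passed with --set are recorded in the Helm release and in your shell history, so treat them accordingly. To avoid pasting key material by hand, the keys can also be generated inline with command substitution (a sketch reusing the Python one-liner from above):

helm install dyff-api oci://registry.gitlab.com/dyff/charts/dyff-api \
  --version "0.12.0" \
  --set auth.signingSecret=$(python3 -c 'import secrets; print(secrets.token_hex(64))') \
  --set auth.sessionSecret=$(python3 -c 'import secrets; print(secrets.token_hex(64))')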

Ingress

In some load-balancing configurations, you may need to set FORWARDED_ALLOW_IPS so that uvicorn trusts the X-Forwarded-* headers from the proxy. We consider this safe here because all external connections go through nginx, and the headers are only used to determine whether a forwarded request should use https.

--set extraEnvVarsConfigMap.FORWARDED_ALLOW_IPS=\* \
--set ingress.enabled=true \
--set ingress.className=nginx \
--set ingress.hosts[0].host=api.dyff.example.com
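The --set fragments in this and the following sections extend the install command from above. For example, to apply them to an existing release (a sketch using helm upgrade with --reuse-values, which preserves previously set values):

helm upgrade dyff-api oci://registry.gitlab.com/dyff/charts/dyff-api \
  --version "0.12.0" \
  --reuse-values \
  --set extraEnvVarsConfigMap.FORWARDED_ALLOW_IPS=\* \
  --set ingress.enabled=true \
  --set ingress.className=nginx \
  --set ingress.hosts[0].host=api.dyff.example.com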

HTTPS

To terminate TLS at the ingress, reference a Kubernetes TLS secret containing the certificate for your host:

--set ingress.tls[0].secretName=api-dyff-example-com \
--set ingress.tls[0].hosts[0]=api.dyff.example.com
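Once DNS for api.dyff.example.com points at the ingress and the TLS secret exists, you can verify the certificate from outside the cluster:

curl -I https://api.dyff.example.com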

Authentication

Set the callback URL that the identity provider will redirect to after login:

--set auth.redirectUrl=https://api.dyff.example.com/auth/callback

Google OIDC connection

To enable sign-in with Google, supply the OAuth client credentials:

--set auth.google.clientId=XXXXXX \
--set auth.google.clientSecret=XXXXXX
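The client ID and secret come from an OAuth 2.0 client created in the Google Cloud console, and the auth.redirectUrl configured above must be registered there as an authorized redirect URI.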

Storage

Dyff uses S3 to store large files. Most cloud providers offer an S3-compatible object storage API, so this decision makes it easy to run Dyff in different environments with similar configuration settings.

Select the s3 storage backend and provide the credentials for the bucket:

--set storage.backend=s3 \
--set storage.s3.accessKey=XXXXXX \
--set storage.s3.secretKey=XXXXXX

Many configurations use a single S3 endpoint that is reachable by clients; in that case, set only the external endpoint and disable the internal one:

--set storage.s3.external.endpoint=http://s3.minio.dyff.local \
--set storage.s3.internal.enabled=false

In some configurations, it may be necessary to configure a separate internal S3 endpoint, which lets the platform reach object storage over a private network. Dyff hands out download links as presigned S3 URLs, and those URLs must be signed for the hostname that clients will actually resolve, so the external endpoint must still be provided even when an internal endpoint is used:

--set storage.s3.external.endpoint=http://s3.minio.dyff.local \
--set storage.s3.internal.enabled=true \
--set storage.s3.internal.endpoint=http://minio.minio.svc.cluster.local:9000

Once the S3 endpoint is configured, bucket URLs must be specified for all resources:

--set storage.urls.datasets=s3://<dyff-bucket>/datasets \
--set storage.urls.measurements=s3://<dyff-bucket>/measurements \
--set storage.urls.modules=s3://<dyff-bucket>/modules \
--set storage.urls.outputs=s3://<dyff-bucket>/outputs \
--set storage.urls.reports=s3://<dyff-bucket>/reports \
--set storage.urls.safetycases=s3://<dyff-bucket>/safetycases
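S3 prefixes like datasets/ exist implicitly once objects are written under them, but the bucket itself must already exist. If your provider does not create it for you, one option is the aws CLI pointed at the external endpoint (a sketch; substitute your bucket name and credentials):

aws s3 mb s3://<dyff-bucket> --endpoint-url http://s3.minio.dyff.local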

Google Cloud Storage (GCS)

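The chart's storage settings are expressed in terms of S3, and GCS offers an S3-compatible XML API that authenticates with HMAC keys. One option, then, is to reuse the s3 backend and point the external endpoint at GCS's interoperability endpoint (a sketch under that assumption; create HMAC keys for a service account first):

--set storage.backend=s3 \
--set storage.s3.external.endpoint=https://storage.googleapis.com \
--set storage.s3.accessKey=XXXXXX \
--set storage.s3.secretKey=XXXXXX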

More information

Find the complete listing of configuration settings on Artifact Hub.