Responsible use and privacy¶
Biometrics are powerful identifiers and, unlike a password, you cannot rotate the underlying signal. That asymmetry shapes how we build, deploy, and document BioTone. This page is a public summary of the principles we hold ourselves to. None of it is legal advice.
Consent and disclosure¶
- Enrollment is opt-in. A user — not an operator on their behalf — must take an explicit action to enroll a biometric.
- Purpose limitation. A biometric enrolled for authentication is not silently reused for identification at scale, for training, or for analytics; any such use requires that the user be told and agree.
- Plain-language disclosure. The first time a user is asked to enroll, they should be told what is captured, where it is stored, how long it is retained, and how to revoke it. Operators using BioTone are expected to provide that disclosure in their own product surfaces; we provide template copy and require operators to attest that they have a lawful basis to capture biometric data.
Data minimization¶
- Templates over raw images. Wherever the pipeline allows, we store and transmit a derived numeric template rather than a raw image or recording. See Privacy-Preserving Biometrics.
- No raw retention by default for analysis-only calls. When the developer API is used purely for embeddings, segmentation, or quality scoring, the input sample is processed and discarded.
- Opt-in for any training use. Customer data is never used to train shipped models without an explicit, separately-signed opt-in. Live BioTone accounts default to no training use.
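The "templates over raw" principle above can be sketched in a few lines. This is a toy illustration, not BioTone's actual pipeline: `extract_template` stands in for a real embedding model, and the names `Template` and `enroll` are hypothetical. The point it demonstrates is structural: only a derived, fixed-length numeric template leaves the enrollment step, and the raw sample is discarded.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Template:
    """Derived numeric template; the raw sample is never stored."""
    vector: tuple  # fixed-length feature vector

def extract_template(raw_sample: bytes, dim: int = 8) -> Template:
    # Toy stand-in for a real embedding model: fold the raw bytes
    # into a fixed-length vector. A production extractor would run
    # a trained network here instead.
    acc = [0] * dim
    for i, b in enumerate(raw_sample):
        acc[i % dim] = (acc[i % dim] + b) % 256
    return Template(vector=tuple(acc))

def enroll(raw_sample: bytes) -> Template:
    template = extract_template(raw_sample)
    # Deliberately drop the raw sample: only the template is
    # retained or transmitted downstream.
    del raw_sample
    return template
```

Because the template is a lossy derivation, the raw image or recording cannot be reconstructed from what is stored; see Privacy-Preserving Biometrics for the techniques that make this robust in practice.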
Fairness and demographic performance¶
- Subgroup reporting. When we publish accuracy numbers for a modality, the goal is to publish both aggregate false match rate (FMR) and false non-match rate (FNMR) figures and subgroup breakdowns (age, gender, skin tone where applicable, language for voice). Aggregate-only reporting hides exactly the failure modes biometrics need to be measured on.
- Acknowledging limits. Some demographic differentials remain open research problems across the entire industry. Where we know a deployment will be sensitive to those differentials, we say so and recommend additional safeguards (multi-factor, easier fallback paths) rather than insisting that the model is "good enough".
For background, see Bias and Fairness in Biometrics.
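Subgroup reporting is mechanically simple; what matters is doing it at all. A minimal sketch of computing per-subgroup FMR and FNMR from labeled comparison trials (the function name and trial format here are illustrative, not a BioTone API):

```python
from collections import defaultdict

def subgroup_error_rates(trials):
    """Compute FMR and FNMR per subgroup.

    trials: iterable of (subgroup, is_mated, accepted) tuples, where
    is_mated is True for a genuine (same-person) comparison and
    accepted is the system's match decision.
    """
    counts = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
    for subgroup, is_mated, accepted in trials:
        c = counts[subgroup]
        if is_mated:
            c["gen"] += 1
            if not accepted:
                c["fnm"] += 1  # genuine pair rejected: false non-match
        else:
            c["imp"] += 1
            if accepted:
                c["fm"] += 1   # impostor pair accepted: false match
    return {
        g: {
            "FMR": c["fm"] / c["imp"] if c["imp"] else None,
            "FNMR": c["fnm"] / c["gen"] if c["gen"] else None,
        }
        for g, c in counts.items()
    }
```

Running this per subgroup, rather than pooling all trials, is what surfaces a modality that looks fine in aggregate but fails disproportionately for one group.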
Spoofing, attacks, and the limits of any system¶
- No biometric system is unattackable. Photos, masks, replays, injection attacks, and synthetic media (voice clones, deepfakes) are real adversaries. The right response is layered: modality-specific presentation attack detection (PAD), multimodal fusion, device attestation, policy-level rate limits, and a credible incident path. See Anti-Spoofing Techniques.
- Multi-signal by default. BioTone is designed so that the failure of a single capture or a single spoof does not unlock an account. Operators can configure this trade-off per policy.
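The multi-signal default can be expressed as a simple policy check. This is a hypothetical sketch, not BioTone's actual policy engine: the signal names, the `SignalResult` shape, and the `min_signals` threshold are all illustrative. What it shows is the invariant the bullet above describes: no single matched-but-spoofed capture is sufficient on its own.

```python
from dataclasses import dataclass

@dataclass
class SignalResult:
    name: str         # e.g. "face", "voice", "device_attestation"
    matched: bool     # did the biometric comparison succeed
    pad_passed: bool  # did presentation attack detection pass

def unlock_decision(results, min_signals: int = 2) -> bool:
    """Unlock only when at least `min_signals` independent signals
    both matched and passed PAD. A single failed or spoofed capture
    therefore can never unlock the account by itself."""
    strong = [r for r in results if r.matched and r.pad_passed]
    return len(strong) >= min_signals
```

Operators tune the equivalent trade-off per policy: a higher threshold raises spoof resistance at the cost of more fallback-path traffic for legitimate users.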
Use cases we will not knowingly support¶
- Mass surveillance — covert identification of individuals at scale without consent, in a public or quasi-public setting.
- Coercive identification of minors or vulnerable populations outside of a clear legal framework with appropriate consent and oversight.
- Predictive judgments from biometrics about characteristics that are not biometric in nature (e.g. emotion, intent, trustworthiness, sexuality). The science does not support these uses; we do not build for them.
If you have a question about whether a use case is in scope, contact us through the main BioTone site before building on top of the platform.
Reporting a vulnerability¶
If you believe you have found a security or privacy issue affecting BioTone, the public site, the developer portal, or this wiki, please contact us via the responsible disclosure path on the main BioTone site. We appreciate coordinated disclosure and will acknowledge reports.