MindFrame: Ethical, Transparent, and Secure
MindFrame is a platform for ethical, psychologically informed conversational agents. The project supports research on, and delivery of, clinical tools through a transparent, auditable implementation focused on safety, accessibility, and participant dignity.
Ethical and Inclusive by Design
- All interventions are structured as open, human-readable templates, in a format designed for collaboration and transparency.
- Interventions are developed following established psychological methods and peer-reviewed models.
- The platform supports transparent oversight and version control of all intervention content.
- We prioritise accessibility, including mobile-first design and multilingual support.
- Participant consent, autonomy, and privacy are central to every design decision.
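To make the idea of open, human-readable templates concrete, a minimal sketch of what such an intervention file might look like is shown below. The field names and structure are illustrative assumptions for this document, not MindFrame's actual schema:

```yaml
# Hypothetical intervention template -- field names are illustrative,
# not MindFrame's actual schema.
id: gratitude-journal
version: 1.2.0
model: behavioural-activation        # the peer-reviewed model the intervention follows
languages: [en, de]                  # multilingual support
consent:
  required: true                     # participants must opt in before the first message
steps:
  - prompt: >
      What is one small thing that went well for you today?
  - prompt: >
      How did that moment make you feel?
```

Because files like this are plain text, they can be version controlled, diffed, and reviewed line by line by non-programmers.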
Responsible AI Usage
MindFrame uses large language models (LLMs) provided by OpenAI through Microsoft Azure’s enterprise agreement. These models operate in inference-only mode: no user data is used to train or improve the models, and no inputs or outputs are retained by OpenAI. All data processing remains within a European cloud infrastructure, compliant with GDPR and relevant university data governance frameworks.
Accessible and User-Friendly
Interventions are delivered via familiar, mobile-friendly chat interfaces. Participants can interact with the system through the web, Telegram, or Microsoft Teams. We are actively developing support for the Matrix protocol (via Element) to provide end-to-end encrypted delivery within a closed ecosystem, ensuring a higher standard of confidentiality and control.
Transparent Implementation
The platform is open source (hosted on GitHub) and built with widely used, peer-reviewed software tools. Intervention logic is declared in Markdown and YAML files, which are version controlled and easy to audit. System behaviour is deterministic wherever possible, and all AI responses are logged with full context (where permitted) for traceability.
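As a rough illustration of logging responses with full context, the sketch below builds a single JSON audit record tying a response to the versioned intervention file that produced it. The function and field names are hypothetical, chosen for this example rather than taken from the MindFrame codebase:

```python
import json
from datetime import datetime, timezone

def log_response(session_id: str, prompt: str, response: str,
                 template_version: str) -> str:
    """Build one JSON audit record for a single AI response.

    Field names are illustrative; the actual MindFrame log schema may differ.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        # Ties the response back to a specific version-controlled template,
        # so auditors can reproduce the exact intervention context.
        "template_version": template_version,
        "prompt": prompt,
        "response": response,
    }
    return json.dumps(record)

entry = log_response("s-001", "How are you feeling today?",
                     "I'm here to listen.", "v1.2.0")
print(json.loads(entry)["template_version"])  # → v1.2.0
```

Storing records as append-only JSON lines keeps the audit trail machine-readable while remaining inspectable with ordinary text tools.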
Contact & Further Information
We welcome enquiries from institutional review boards, governance bodies, and funding agencies. Please contact Dr. Ben Whalley for further information or access to technical documentation, data protection impact assessments (DPIAs), or example interventions.