DFI Artificial Intelligence Transparency Statement
This Artificial Intelligence Transparency Statement was last updated on March 9, 2026.
A. Purpose
The Deep Foundations Institute (DFI) is committed to the ethical, responsible, and transparent use of Artificial Intelligence (AI) in support of its mission to advance the deep foundations industry.
This Transparency Statement outlines how AI technologies are used within DFI, the governance and safeguards in place, and DFI’s commitment to accountability, human oversight, and trust.
This AI Transparency Statement is intended for public disclosure and is published to inform members, partners, and the broader community about DFI’s use of artificial intelligence and the safeguards in place.
B. Scope of AI Use
DFI uses AI technologies in a controlled and risk-managed manner to support operational efficiency, research quality, and member engagement. Approved uses include:
- Enhancing staff productivity through approved and organization-managed tools (e.g. Microsoft Copilot).
- Supporting non-sensitive content creation and editing, such as grammar checking, formatting, summarization, and template generation.
- Assisting editorial and peer-review processes (e.g. plagiarism detection, language checks, and analytical support), while maintaining full human oversight and decision-making authority.
- Improving website functionality, analytics, accessibility, and content delivery through AI-enabled tools.
- Supporting internal administrative activities such as meeting notes, action summaries, and document analysis using approved tools.
B.1 Restricted Uses
DFI does not use AI technologies for:
- Autonomous decision-making that has legal, safety, reputational, or ethical consequences without human review.
- Processing personally identifiable information (PII), confidential data, or sensitive datasets unless explicitly authorized and protected under approved controls.
- Covert use of generative AI in meetings or recordings without notification and approval.
- Representing AI-generated content as human-authored work without appropriate disclosure.
C. AI Governance and Oversight
DFI maintains a formal AI governance framework aligned with internationally recognized standards, including:
- ISO/IEC 27001 (Information Security Management)
- ISO/IEC 42001 (Artificial Intelligence Management Systems)
All AI tools used by DFI are subject to risk assessment, approval, and ongoing monitoring.
DFI maintains an AI Tool Register that records approved tools, permitted use cases, and review status.
Oversight is provided by the AI Oversight Team (AIOT), which is responsible for:
- Reviewing and approving AI tools and use cases
- Monitoring compliance with policy and standards
- Overseeing AI-related risks, incidents, and grievances
- Advising leadership on responsible AI adoption
Human accountability remains paramount. AI systems are used only as decision-support tools and do not replace professional judgement.
D. Transparency and Disclosure
DFI is committed to openness regarding AI use and requires disclosure where AI materially contributes to:
- Publications, technical papers, and presentations
- External communications or public-facing content
- Editorial or review processes
Submissions or outputs generated with substantial AI assistance must clearly acknowledge such use.
E. Training and Awareness
All staff and relevant stakeholders engaging with AI must complete mandatory AI Awareness Training, which covers:
- Responsible and ethical AI use
- Data protection and privacy obligations
- AI limitations, bias risks, and misinformation
- Human oversight and accountability requirements
Training is refreshed at least annually or following significant policy or system changes.
F. Grievances and Incident Management
DFI provides clear and transparent mechanisms for raising concerns related to AI use. AI-related grievances may be submitted through a formal grievance process and are investigated impartially.
Grievances may be submitted via the online submission form.
AI-related incidents (e.g. misuse, data exposure, or system failure) are managed through a structured incident response framework.
All grievances and incidents are recorded in dedicated registers and reviewed to support continuous improvement.
G. Continuous Improvement
DFI recognizes that AI technologies and expectations continue to evolve. This Transparency Statement, together with the DFI AI Usage Policy, is reviewed regularly to reflect emerging risks, standards, and stakeholder expectations.
Through this commitment, DFI seeks to demonstrate leadership in responsible AI governance and to maintain the confidence of its members, partners, and the broader community.
