AI Policy

AI Usage Regulations

1. Purpose and Audience

Ceylon Logica Institute and Consultancy entities employ generative artificial intelligence systems strictly as supportive instruments that enhance the clarity, efficiency, and reach of our human-led scholarship. We pair these ethical guidelines with structured staff training in prompt design, source verification, fact-checking, and bias mitigation, so that AI increases efficiency without compromising research quality.

This policy shows clients, academic peers, policymakers, and the broader public that every output bearing our name remains grounded in rigorous research, critical judgment, and professional ethics. By explaining when AI may assist our work and when it may not, we provide a transparent basis for trust while reaffirming that final responsibility for accuracy, insight, and policy relevance always rests with our researchers.

2. Scope

The policy applies to all personnel associated with the Ceylon Logica Institute and Consultancy entities—including employees, fellows, consultants, and interns—regardless of contract type or location. It governs the selection, configuration, and day-to-day use of any third-party or in-house AI service—large language models, generative image and video tools, translation engines, information-retrieval agents, style-refinement software, and comparable applications—whenever these tools support research, writing, data analysis, communications, or internal administration.

It also covers the handling of data fed to or produced by such systems, ensuring that confidentiality, privacy, security, and intellectual-property obligations are upheld throughout every AI-assisted workflow.

3. Commitment to Training and Transparency

Ceylon Logica Institute and Consultancy entities are committed to a structured, ethical, and cost-efficient adoption of AI. We therefore:

  • invest in targeted staff training on prompt engineering, source verification, fact-checking, and bias mitigation so that AI augments rather than diminishes research quality;

  • maintain clear internal guidelines that encourage responsible experimentation instead of blanket prohibitions, allowing our teams to realize AI’s advantages while guarding against misuse;

  • practise external transparency by openly communicating when and how AI contributes to our outputs, reinforcing credibility and trust with clients, academics, and the public.

Transparency, guiding principles, and clear communication—not secrecy or blanket bans—are the safeguards that protect rigorous scholarship and sustained public confidence.

4. Approved AI Services and Roles (reviewed quarterly)

Service                      | Primary Function                          | Typical Use Cases                                                          | Oversight Lead
-----------------------------|-------------------------------------------|----------------------------------------------------------------------------|---------------------
OpenAI GPT-4o                | Large language model                      | Draft abstracts, outlines, and literature scans                            | Research Director
DeepL                        | Neural machine translation                | Rapid bilingual drafting, terminology checks                               | Publications Manager
DALL·E 3 / Midjourney        | Generative imagery                        | Conceptual illustrations, social-media graphics (labelled “AI-generated”)  | Design Lead
Sona Video                   | Generative video                          | Internal explainer prototypes, storyboard previews                         | Comms Director
Perplexity                   | AI-powered retrieval and synthesis        | Horizon scans, bibliography building, first-pass Q&A                       | Assigned Researcher
Druide informatique (Canada) | Sentence rephrasing and style refinement  | Clarity and conciseness edits, tone harmonization                          | Publications Manager

Any new tool requires approval by the AI Ethics Lead before use.

5. Permitted Use Cases

5.1 Research assistance: Keyword mapping, outline creation, and data cleaning.
5.2 Editing services: Proofreading, sentence rewriting, headline ideation, and text translation.
5.3 Visual and video concepts: Developing non-documentary visuals or brief videos for presentations.
5.4 Automation: Summarizing meetings, standardizing correspondence, and other repetitive tasks.

6. Data-Handling and Privacy Safeguards

6.1 All AI-assisted workflows are conducted through encrypted channels and stored on encrypted media controlled by the Ceylon Logica Institute and Consultancy.
6.2 Personal identifiers are removed or pseudonymized before any data is processed by external AI services to prevent re-identification (see the sketch after this list).
6.3 Each prospective AI vendor undergoes a risk assessment—covering data-retention practices, model training on user prompts, geographic hosting, and EU adequacy—before approval.
6.4 The Information Security team conducts annual penetration tests and ad hoc technical audits to ensure that AI integrations introduce no new vulnerabilities.
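For illustration only, the following Python sketch shows one way the pseudonymization step in 6.2 might be applied before a record leaves our systems. The field names, salt handling, and token format are hypothetical assumptions made for this sketch, not prescribed tooling.

    # Illustrative pseudonymization sketch; field names and salt handling are
    # hypothetical, not mandated by this policy.
    import hmac
    import hashlib

    SECRET_SALT = b"example-salt"  # in practice, store and rotate via a secrets vault

    def pseudonymize(value: str) -> str:
        """Replace a personal identifier with a stable, non-reversible token."""
        digest = hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256)
        return "PSEUDO_" + digest.hexdigest()[:12]

    def scrub_record(record: dict, identifier_fields=("name", "email")) -> dict:
        """Return a copy of the record with identifier fields pseudonymized."""
        return {
            key: pseudonymize(val) if key in identifier_fields else val
            for key, val in record.items()
        }

    # Only the scrubbed copy is ever passed to an external AI service.
    raw = {"name": "A. Perera", "email": "a.perera@example.org",
           "notes": "interview transcript"}
    print(scrub_record(raw))

Using a keyed hash (HMAC) with a secret salt keeps tokens stable for internal cross-referencing while preventing re-identification by the external service.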

7. Quality Assurance and Human Review

7.1 Every AI-assisted output must undergo the same peer review and editorial scrutiny as traditional content.
7.2 Teams keep a log of every AI interaction, including prompts, model versions, parameters, and the human reviewers involved (an illustrative log entry follows this list).
7.3 Researchers must trace every factual claim to human-verified primary sources; AI-generated or hallucinated citations are rejected.
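As one possible shape for the log required in 7.2, the sketch below writes one JSON Lines entry per AI interaction; the schema, model-version string, and file name are hypothetical examples, and teams may capture the same fields in any system they prefer.

    # Illustrative AI-usage log entry; the schema, model-version string, and
    # file name are hypothetical examples, not a prescribed format.
    import json
    from datetime import datetime, timezone

    log_entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": "OpenAI GPT-4o",               # from the approved-services table
        "model_version": "gpt-4o-2024-05-13",  # hypothetical version identifier
        "parameters": {"temperature": 0.2},
        "prompt": "Summarise the attached literature scan in 200 words.",
        "output_reference": "drafts/lit-scan-summary-v1.md",
        "human_reviewer": "Assigned Researcher",
        "review_outcome": "approved with edits",
    }

    # Appending one JSON object per line keeps every AI-assisted output auditable.
    with open("ai_usage_log.jsonl", "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(log_entry) + "\n")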

8. Transparency and Attribution

8.1 Public reports and policy briefs include an acknowledgement identifying any AI tool that materially influenced the analysis or text.
8.2 Articles, op-eds, and blog posts disclose substantial AI contributions in a footnote naming the tool and describing its role.
8.3 Images and videos created with generative models carry a caption stating, “Created with [model]”, enabling audiences to distinguish AI-generated media from documentary content.

9. Regulatory and Legal Alignment

9.1 The Compliance Office conducts an annual self-assessment, mapping all AI use cases to EU AI Act risk categories and documenting required compliance actions.
9.2 For each high-risk workflow, a Data Protection Impact Assessment is completed and its mitigation measures are fully implemented before deployment.
9.3 A central registry tracks software licences and dataset permissions to ensure continuous respect for copyright, database rights, and other intellectual-property obligations.

10. Policy Review and Continuous Improvement

10.1 The AI Ethics Lead convenes a formal review twice each year and may trigger an interim review whenever legislation, organizational risk, or significant AI advances warrant.
10.2 All amendments are approved by the AI Oversight Committee, published on the website with a clear change log, and communicated to staff.
10.3 Between formal reviews, the policy is revisited as needed so that it remains aligned with rapid developments in the AI sector and accommodates new use cases as the technology evolves.

Contact

For questions about this policy, please email ai-ethics@ceylonlogica.com.