EU AI Act Policy

Effective Date: November 16, 2025

This EU AI Act Policy explains how CreativeIQ / Growth UnLimited (“we”, “us”, “our”) uses artificial intelligence (“AI”) in our services and how we align our practices with the requirements and spirit of the EU Artificial Intelligence Act (EU AI Act) and related EU regulations.

This Policy applies to all AI-supported features connected to:

  • our websites and landing pages,
  • online courses and digital products,
  • emails, newsletters and free materials, and
  • any other online services operated by us.

It complements our Terms of Use, Privacy Policy (GDPR), Cookie Policy and Full Legal Disclaimer.

1. Our Role Under the EU AI Act

1.1 Deployer of AI Systems

We primarily act as a “deployer” of AI systems within the meaning of the EU AI Act – that is, we use AI tools and services provided by third parties (for example, AI text or voice generation systems) to support our educational content and customer communication.

1.2 No High-Risk or Prohibited AI Systems

We do not intentionally develop or deploy AI systems in areas classified as “high-risk” or “unacceptable risk” under the EU AI Act (for example, biometric identification, credit scoring, employment screening, or social scoring). Our AI use is focused on:

  • generating and refining educational texts,
  • producing or improving audio/voice content for courses,
  • supporting internal content workflows, and
  • limited, non-binding recommendations of content to users.

If in the future we consider using any AI in a field potentially classified as high-risk, we will perform a dedicated risk assessment and put in place all legally required safeguards before deployment.

2. Purpose and Types of AI Use

We use AI systems only for clearly defined, limited purposes, including:

Content creation and refinement

AI-assisted drafting and optimisation of texts, scripts, learning materials, and email content, reviewed by humans before publication whenever the content is substantive.

Audio and voice production

AI-generated or AI-enhanced voice-overs or audio tracks for courses and materials, which are then curated and checked by us.

Support functions

Internal support such as drafting responses, structuring course content, or summarising information for our team.

Limited personalisation

Non-invasive, limited personalisation of content (for example, suggesting relevant course sections) based on general usage patterns, where such personalisation is legally permissible and consistent with our Privacy and Cookie Policies.

We do NOT use AI systems to:

  • make automated decisions that have legal or similarly significant effects on individuals (Art. 22 GDPR),
  • conduct social scoring, biometric categorisation, or emotion recognition for decision-making,
  • perform any form of covert manipulation or exploitation of vulnerabilities.

3. Risk Category and Transparency

3.1 Limited-Risk AI Systems

Our AI-supported features fall within the "limited-risk" category under the EU AI Act. Such systems are generally permitted but subject to specific transparency and information requirements, especially when users interact directly with AI or when AI generates content that could be mistaken for human-created material.

3.2 Transparent Use of AI

Where we use AI in ways that may be relevant to users, we will:

  • clearly indicate in our Terms of Use, Privacy Policy, and this Policy that AI is used to generate or refine parts of our content,
  • inform users when they are interacting with AI-supported features (for example, in chat-like interactions, auto-generated suggestions, or AI-voice audio), unless it is obvious from the context,
  • label or describe AI-generated or heavily AI-assisted content appropriately where this is necessary to avoid confusion.

4. Human Oversight and Responsibility

4.1 Human-in-the-Loop

AI tools are used to support, not replace, human expertise. For content that informs, advises, or teaches:

  • humans remain responsible for the final selection, approval, and contextualisation of the content,
  • AI outputs are reviewed and, where needed, corrected or adapted before being integrated into our courses and materials.

4.2 No Fully Automated Decisions with Significant Effect

We do not use AI for fully automated decision-making that produces legal effects or similarly significant impacts on individuals (for example, automated denial of access, automated credit decisions, or automated employment screenings). Where any automation assists in routine processes (e.g., sending standard emails), humans remain accountable and can intervene.

5. Data Protection, Training Data and Logs

5.1 GDPR Alignment

All personal data processed in connection with AI use is handled in accordance with our Privacy Policy and the GDPR. This includes:

  • lawfulness, fairness, and transparency,
  • purpose limitation and data minimisation,
  • storage limitation and security,
  • respect for data subject rights.

5.2 Use of Third-Party AI Providers

Where we use third-party AI services, we:

  • select providers carefully,
  • ensure appropriate contractual safeguards, and
  • review their documentation on data protection and EU AI Act-related obligations.

We do not train our own high-risk AI systems using personal data of users. Where AI tools process text or other data you submit (for example, in forms or email replies), this is covered by our Privacy Policy and the provider’s own terms, and is limited to purposes necessary to provide our services.

5.3 Logging and Monitoring

Where AI tools generate output that affects the services we offer, we maintain appropriate logging and monitoring in order to:

  • detect technical problems,
  • correct erroneous or inappropriate outputs, and
  • maintain a record for accountability where necessary.

6. Content Quality, Bias and Limitations

6.1 Potential Errors and Bias

AI-generated or AI-assisted content can contain inaccuracies, outdated information, or biased formulations. While we aim to review and correct such content, we cannot guarantee that all AI-assisted output is error-free.

6.2 Mitigation Measures

We take reasonable steps to reduce risks of misleading or harmful content by:

  • human review of AI-generated content before publication where it may influence user decisions,
  • continuous improvement of prompts, review processes, and editorial guidelines,
  • avoiding AI use for highly sensitive personal recommendations (medical, legal, financial, or mental health decisions), as set out in our Full Legal Disclaimer.

7. Deepfakes, Synthetic Media and Manipulated Content

7.1 No Deceptive Deepfakes

We do not use AI to create deceptive deepfake content (for example, realistic but false representations of real persons or events) for the purpose of misleading users.

7.2 Labelling Synthetic Media

If we ever use AI to generate synthetic images, audio, or video that could reasonably be mistaken for real persons or real events, we will clearly label this content as AI-generated or AI-manipulated in an appropriate manner, in line with the transparency obligations of the EU AI Act for such content.

8. User Information and Rights

8.1 Information Rights

Users may request information on how AI is used in relation to their personal data within our services. We will provide:

  • a general description of the AI-supported functions relevant to them,
  • clarification of whether their personal data is processed by AI tools, and
  • information on applicable safeguards and rights under GDPR and the EU AI Act, to the extent relevant.

8.2 Objections and Complaints

Where processing is based on legitimate interests, users may object to certain types of processing, including AI-supported analytics or personalisation, in line with our Privacy Policy and GDPR.

Users may also raise concerns regarding our AI use with us directly via the contact details below or lodge a complaint with the competent data protection authority.

9. Governance, Compliance and Updates

9.1 Monitoring Legal Developments

The EU AI Act entered into force in 2024 with phased application of different obligations through 2025–2027. We will monitor developments, guidance and implementation timelines and adapt our internal procedures and this Policy as needed.

9.2 Internal Responsibilities

We designate internal responsibility for:

  • cataloguing and reviewing our AI use cases,
  • assessing whether a use case might fall under high-risk categories before deployment,
  • coordinating with data protection and information security requirements,
  • updating relevant documentation (Terms of Use, Privacy Policy, Cookie Policy and this AI Policy).

9.3 Voluntary Best Practices

Even where certain AI uses are only subject to limited transparency obligations, we strive to follow best practices for responsible AI:

  • clear communication to users,
  • human oversight,
  • proportional and minimally intrusive use,
  • consistency with our ethical and brand values.

10. Relationship to Other Policies

This EU AI Act Policy works together with the following documents, which govern your use of our services and our handling of your data:

  • Terms of Use
  • Privacy Policy (GDPR)
  • Cookie Policy
  • Full Legal Disclaimer

In the event of any conflict, legal rights under EU law (including the EU AI Act and GDPR) take precedence.

11. Contact

For questions about this EU AI Act Policy, our use of AI, or your rights in connection with AI-supported features, please contact:

CreativeIQ / Growth UnLimited
Dr. Thomas Steinert
Ostwender Straße 12
30161 Hannover
Germany

Email (AI, compliance, and data protection): ts-vs2@thomas-steinert.de
