Understanding the EU AI Act's Framework


The EU AI Act, which entered into force on August 1, 2024, establishes a comprehensive legal framework for artificial intelligence across the European Union, designed to promote innovation while protecting safety and fundamental rights and mitigating potential risks. The Act follows a phased implementation approach that began on February 2, 2025, with full implementation required by August 2027.
 
The Act categorizes AI systems into four distinct risk tiers: Unacceptable, High, Limited, and Minimal. Each tier carries specific requirements and obligations for developers and deployers. Certain uses of AI, such as systems that manipulate individuals or exploit vulnerabilities, are banned outright. The four tiers, their effective dates, and their key obligations are summarized below.
 

Unacceptable Risk (Banned) - effective February 2, 2025

  • Subliminal manipulation, social scoring, and biometric categorization using sensitive data such as health data
  • Immediate compliance required

High Risk (Permitted with requirements) - effective August 2, 2026

  • Critical infrastructure, education, employment, law enforcement
  • Risk assessments, human oversight, documentation
  • Compliance within 24 months; conformity assessment before market entry

Limited Risk (Transparency obligations) - effective August 2, 2025

  • Chatbots, deepfakes
  • Users must be informed they are interacting with AI, alongside other applicable regulations
  • Transparency requirements within 12 months

Minimal Risk (Largely unregulated) - effective August 2, 2026

  • Spam filters, AI-enabled games
  • No mandatory obligations; voluntary codes of conduct
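For teams triaging their own systems, the tiering logic above can be expressed as a simple lookup. The sketch below is purely illustrative and assumes the categories and dates from the summary; the example use cases and their classifications are hypothetical and not legal advice.

```python
from dataclasses import dataclass

@dataclass
class RiskTier:
    name: str
    status: str
    effective: str    # compliance date from the summary above
    obligations: str

# Illustrative mapping of the four EU AI Act risk tiers (dates per the summary above).
TIERS = {
    "unacceptable": RiskTier("Unacceptable Risk", "Banned", "2025-02-02",
                             "Prohibited outright; immediate compliance required"),
    "high":         RiskTier("High Risk", "Permitted with requirements", "2026-08-02",
                             "Risk assessments, human oversight, documentation, conformity assessment"),
    "limited":      RiskTier("Limited Risk", "Transparency obligations", "2025-08-02",
                             "Users must be informed they are interacting with AI"),
    "minimal":      RiskTier("Minimal Risk", "Largely unregulated", "2026-08-02",
                             "No mandatory obligations; voluntary codes of conduct"),
}

# Hypothetical classification of example use cases drawn from the summary above.
USE_CASE_TIER = {
    "social_scoring": "unacceptable",
    "resume_screening": "high",      # employment is listed as a high-risk area
    "support_chatbot": "limited",
    "spam_filter": "minimal",
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier for a known example use case."""
    return TIERS[USE_CASE_TIER[use_case]]

if __name__ == "__main__":
    t = tier_for("support_chatbot")
    print(f"{t.name} ({t.status}), applies from {t.effective}: {t.obligations}")
```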

Similar to how GDPR established global privacy standards, the legislation's extraterritorial scope means it applies not only to EU-based companies but also to non-EU entities that market AI systems in the EU or whose AI outputs are used within EU borders. This global reach makes compliance a priority for businesses worldwide, regardless of where they are headquartered. Non-compliance with the AI Act can result in significant penalties, including fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.

 

  • Maximum fine: EUR 35 million, or 7% of global annual turnover, for deploying banned AI systems
  • Standard violations: EUR 15 million, or 3% of global annual turnover, for most other violations
  • Initial enforcement: the first phase began February 2, 2025, with the ban on unacceptable-risk systems
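To make the fine structure concrete, the cap scales with company size because the percentage applies whenever it exceeds the fixed amount (the Act uses a "whichever is higher" rule for undertakings). The following is a minimal arithmetic sketch of that rule; the turnover figure is a made-up example.

```python
def max_fine_eur(global_annual_turnover_eur: float, banned_practice: bool) -> float:
    """Illustrative EU AI Act fine cap: the fixed amount or the turnover
    percentage, whichever is higher."""
    if banned_practice:
        fixed, pct = 35_000_000, 0.07   # prohibited AI practices
    else:
        fixed, pct = 15_000_000, 0.03   # most other violations
    return max(fixed, pct * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover:
print(max_fine_eur(2_000_000_000, banned_practice=True))    # 140,000,000.0 (7% exceeds EUR 35M)
print(max_fine_eur(2_000_000_000, banned_practice=False))   #  60,000,000.0 (3% exceeds EUR 15M)
```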

 

Enabling AI Compliance Through Secure Data Handling with SendSafely

As businesses integrate AI into their operations, managing sensitive data securely becomes paramount for regulatory compliance. SendSafely's end-to-end encryption technology addresses several key requirements of the EU AI Act, particularly when integrated with AI-driven customer engagement tools like Intercom's Fin AI agent.

 

End-to-End Encryption

SendSafely protects the sensitive data requested by AI systems during customer interactions with enterprise-grade encryption, both in transit and at rest. By default, AI systems have no access to the encrypted content, and SendSafely allows organizations to control when, where, and by whom any subsequent decryption happens. This meets the data governance and security requirements implicit in the EU AI Act, especially for high-risk applications where data breaches can have significant consequences.

Human-Centric Design

The EU AI Act emphasizes that AI systems must remain under human control. SendSafely's integration with Intercom's Fin AI Agent exemplifies this principle by enabling seamless transitions between automated AI interactions and human support when necessary, maintaining the human oversight that is central to the legislation.

Transparency Features

With explicit notifications when sensitive information is being collected and clear disclosure of AI interactions, SendSafely's solution addresses the transparency requirements mandated by the Act, building trust with users while ensuring regulatory compliance. Detailed SendSafely audit logs help provide oversight of all data access and movement.

 

Practical Implementation: SendSafely's Integration with Intercom's Fin AI and Other AI Tools

SendSafely's integration with Intercom Fin demonstrates how businesses can balance AI innovation with regulatory compliance. The solution enables fully automated, secure collection of sensitive documents, without human intervention, while maintaining security, transparency, and oversight capabilities. The AI agent or chatbot never sees or collects your data; instead, it serves up a SendSafely secure link, allowing SendSafely to act as a blind third party through end-to-end encryption.
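Conceptually, the handoff looks like the sketch below: rather than accepting a file upload itself, the AI agent replies with a disclosure and a secure upload link, and only ever handles an opaque reference to the package. The function names and URL here are hypothetical placeholders, not the actual SendSafely or Intercom APIs; consult the official documentation for the real integration.

```python
import secrets

def create_secure_upload_link(customer_email: str) -> dict:
    """Hypothetical stand-in for requesting a SendSafely secure upload link
    (e.g. a hosted Dropzone-style page). Files uploaded through the link are
    end-to-end encrypted in the browser; this service never sees plaintext."""
    package_id = secrets.token_urlsafe(8)  # opaque reference, not file contents
    return {
        "package_id": package_id,
        "upload_url": f"https://example.sendsafely-portal.invalid/upload/{package_id}",
    }

def ai_agent_reply(customer_email: str) -> str:
    """What the chatbot actually returns: a disclosure plus a link.
    The agent never receives, stores, or decrypts the document itself."""
    link = create_secure_upload_link(customer_email)
    return (
        "You are chatting with an automated assistant. "              # AI disclosure
        "Please do not paste sensitive documents into this chat. "
        f"Upload them securely here instead: {link['upload_url']}"
    )

print(ai_agent_reply("customer@example.com"))
```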

 

Security by Design

The end-to-end encryption model ensures sensitive data exchanged during AI interactions remains protected, aligning with the Act's implicit security requirements for AI systems.

Transparent User Interactions

Users are explicitly informed when interacting with AI systems and when sensitive data is being collected, meeting transparency obligations required by the legislation.

Human Oversight

The integration allows for seamless transition from AI-driven interactions to human support agents when necessary, maintaining the human oversight principle emphasized in the EU AI Act. 

Auditable Compliance

Comprehensive audit trails document all secure file transfers and interactions, supporting the record-keeping requirements for AI system deployments.
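One way to picture how the last two points fit together is an escalation rule paired with an audit record: when the automated flow cannot proceed, the conversation is handed to a human agent, and every secure-transfer event is logged for later review. This is a generic sketch of that pattern under assumed field names, not SendSafely's or Intercom's actual schema.

```python
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def record_event(event: str, package_id: str, actor: str) -> None:
    """Append an audit entry for a secure-transfer event (assumed fields)."""
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,          # e.g. "link_issued", "file_uploaded", "escalated"
        "package_id": package_id,
        "actor": actor,          # "ai_agent" or a human agent's ID
    })

def handle_turn(user_message: str, package_id: str) -> str:
    """Escalate to a human when the automated flow cannot proceed."""
    if "speak to a person" in user_message.lower():
        record_event("escalated", package_id, actor="ai_agent")
        return "Connecting you with a support agent now."
    record_event("link_issued", package_id, actor="ai_agent")
    return "Here is your secure upload link."

print(handle_turn("I'd like to speak to a person please", "pkg_123"))
print(AUDIT_LOG)
```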

 

As the global regulatory landscape for AI continues to evolve, with other jurisdictions likely to follow the EU's lead, solutions like SendSafely that combine security with compliance will become increasingly essential. By implementing such technologies, businesses can navigate the complex requirements of the EU AI Act while continuing to leverage the benefits of artificial intelligence in their operations and customer interactions.

For more information, contact success@sendsafely.com

SendSafely: Integrated File Transfer for the Apps you Love 

If you are looking for a secure way to send or receive files with anyone, or simply need a better way to transfer large files, our platform might be right for you.