Smart AI for your business: Here’s how to use AI safely and within the law

AI is the buzzword of the moment, and that’s not likely to change anytime soon. How can you get the most out of AI tools within your company, safely and in compliance with the law?

Mark Vletter
18 December 2024
5 min read

AI is the buzzword of 2024 and that won’t change in the coming years. But how do you prevent your data from being used to train that AI? How do you prevent customer or colleague data from leaking via AI tools? And how do you make the most of AI tools in your company, but do so safely and within the law?

In this article, we take a closer look at how to use AI responsibly and safely within your organization.

Why secure AI is relevant now

For many companies, the GDPR has had a major impact on how they handle customer and colleague information. These regulations protect the personal data of European citizens and require companies to process data in a transparent, secure manner. The AI Act, which is being implemented gradually from 2025, goes even further and places specific requirements on the development and use of AI systems. This means companies must consider ethics, transparency and security in their AI solutions, whether they merely use those solutions or develop them themselves.

And while the rules are meant to protect your business and your customers, they can also seem complex and therefore daunting. That’s why I’m giving you six practical steps to get started with AI tools safely and within the law.

1. Understand the principles behind the legislation

AI legislation is built on four ethical principles:

  • Respect for human autonomy: AI systems should support individual choices, not undermine them. They should avoid decisions that interfere with this autonomy, so that humans keep ultimate oversight and decision-making authority.
  • Preventing harm: AI should not cause physical, mental or social harm. The system must be safe and robust, with special attention to people in vulnerable positions and situations where power differences are at play. The environmental impact of AI should also be considered.
  • Justice: AI should be developed and applied in a fair and equitable manner, free from prejudice or discrimination. Decisions made by AI systems must be transparent and traceable, with clear accountability to a responsible party.
  • Explainability and transparency: Transparency in AI processes is essential for trust. The goals and capabilities of AI should be clearly communicated, and decisions should be understandable and explainable.


Tip: Make sure the tools you use and the way you deploy them fit within these ethical principles.

2. Understand the risks of AI applications within your company

AI systems fall into different risk categories, from “low” to “high risk,” depending on their impact on users’ privacy, security and autonomy. High-risk systems, such as facial recognition or credit scoring, are subject to strict requirements, such as transparency about how decisions are made. The rules for low-risk systems are simpler. Learn more about the four categories on our AI ethics page.

Tip: Conduct an internal assessment of your AI tools to determine which risk category they fall into. This will help you understand which regulations apply and which security measures are needed. The EU has a handy AI Act Compliance Checker you can use. At Voys, we have our own AI Assessment GPT. You can ask it to check whether your AI idea or tool is safe to use. It gives you a Compliance Score, shows you what works well and where you can improve, and offers some friendly tips.
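
To make this concrete, here is a minimal sketch of what such an internal assessment could look like in code. The yes/no questions and their mapping to the AI Act’s four risk tiers are heavily simplified assumptions for illustration; this is not the EU’s Compliance Checker or our Assessment GPT, and a real assessment needs legal review.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers, heavily simplified."""
    UNACCEPTABLE = "unacceptable"  # banned practices, e.g. social scoring
    HIGH = "high"                  # e.g. credit scoring, biometric identification
    LIMITED = "limited"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal"            # e.g. spam filters


@dataclass
class AIToolAssessment:
    name: str
    manipulates_behaviour: bool   # subliminal or exploitative techniques?
    affects_legal_rights: bool    # credit, hiring, law enforcement, ...?
    interacts_with_humans: bool   # chatbot, generated content?

    def risk_tier(self) -> RiskTier:
        # The first matching question determines the tier.
        if self.manipulates_behaviour:
            return RiskTier.UNACCEPTABLE
        if self.affects_legal_rights:
            return RiskTier.HIGH
        if self.interacts_with_humans:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL


tool = AIToolAssessment(
    name="support-chatbot",
    manipulates_behaviour=False,
    affects_legal_rights=False,
    interacts_with_humans=True,
)
print(f"{tool.name}: {tool.risk_tier().value} risk")  # support-chatbot: limited risk
```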

3. Put privacy and data security front and center

The GDPR sets strict rules for the collection and use of personal data. When using AI tools for data analysis, for example for customer interactions or personalized marketing, you need to know exactly how the data is processed. At Voys, we go one step further: data from customers and colleagues simply cannot be used in AI tools without approval from our legal and security team. Also be aware that tools that previously did not have AI features may have them now, so an extra check is needed whenever an existing tool adds AI functionality.
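
One way to keep track of this is a tool registry that flags tools whose AI features were never reviewed. The sketch below is a hypothetical, minimal version: the field names and approval flow are assumptions for illustration, not our actual internal system.

```python
from dataclasses import dataclass


@dataclass
class ToolRegistryEntry:
    """One row in a hypothetical internal tool registry."""
    name: str
    data_categories: list[str]   # e.g. ["customer contact data"]
    has_ai_features: bool
    ai_approved: bool            # sign-off from legal and security


def needs_review(entry: ToolRegistryEntry) -> bool:
    """Flag tools that gained AI features but were never (re)approved."""
    return entry.has_ai_features and not entry.ai_approved


registry = [
    ToolRegistryEntry("crm-suite", ["customer contact data"], True, False),
    ToolRegistryEntry("invoicing", ["billing data"], False, False),
]

for entry in registry:
    if needs_review(entry):
        print(f"Review needed: {entry.name} now has AI features")
```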

Practical advice: Avoid collecting unnecessary data and use techniques such as data minimization and pseudonymization. Discuss with your team how to perform data analysis without tracking individual customers, and build a registry of the tools you use and the data that may reside in them (see the sketch above).
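
Here is a minimal sketch of data minimization and pseudonymization applied before records go anywhere near an AI tool. The customer record and field names are hypothetical, and the keyed-hash approach is one illustrative choice among several; the key must live outside the dataset, in a vault for example.

```python
import hashlib
import hmac

# Hypothetical secret; in practice this lives in a vault, never in the dataset.
PSEUDONYM_KEY = b"replace-with-a-secret-from-your-vault"

# Data minimization: only the fields the analysis actually needs.
ANALYSIS_FIELDS = {"plan", "monthly_calls", "region"}


def pseudonymize(customer_id: str) -> str:
    """Stable pseudonym: the same input always maps to the same token,
    but the token cannot be reversed without the key."""
    digest = hmac.new(PSEUDONYM_KEY, customer_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]


def minimize(record: dict) -> dict:
    """Drop everything the analysis does not need and replace the
    direct identifier with a pseudonym."""
    slimmed = {k: v for k, v in record.items() if k in ANALYSIS_FIELDS}
    slimmed["customer"] = pseudonymize(record["customer_id"])
    return slimmed


raw = {
    "customer_id": "C-1042",
    "name": "Jane Doe",           # never leaves your systems
    "email": "jane@example.com",  # never leaves your systems
    "plan": "pro",
    "monthly_calls": 412,
    "region": "NL",
}
print(minimize(raw))
# {'plan': 'pro', 'monthly_calls': 412, 'region': 'NL', 'customer': '...'}
```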

4. Create transparency in AI decisions

One of the biggest concerns with AI is the so-called “black box”: decisions are made without insight into the reasons behind them. That’s why the AI Act requires companies to disclose how their AI systems work, especially when those decisions impact users’ lives.

How do you implement this? 

If you use AI for customer segmentation or fraud detection, document how the system makes its decisions. This helps you both reassure customers and meet regulatory requirements.
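
A lightweight way to document this is to log a structured decision record for every automated decision. The schema below is a hypothetical sketch assuming a fraud-detection use case; the field names and model name are made up for illustration.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One auditable entry per automated decision (hypothetical schema)."""
    subject: str          # pseudonymized, never a raw identifier
    model: str            # model name and version
    decision: str
    reasons: list[str]    # human-readable factors behind the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = DecisionRecord(
    subject="a1b2c3d4",
    model="fraud-detector v2.3",
    decision="flag_for_manual_review",
    reasons=[
        "call volume far above the customer's 90-day baseline",
        "calls to a destination the customer never used before",
    ],
)

# Append to a decision log that support staff and auditors can query.
print(json.dumps(asdict(record), indent=2))
```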

5. Ask users for explicit consent

Users need to know that their data is safe and what it is being used for. This is not only required by law, but also crucial for customer trust. The GDPR requires companies to obtain consent from customers before their data is used for AI analytics.

Concrete example 

Ask explicit permission when collecting data and be clear about how that data will be used. For example, when customers register on your website or fill in a form, let them opt in to the specific AI analytics that improve their experience.
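
As a sketch of what per-purpose consent could look like under the hood: the purpose names and fields below are hypothetical, and the key point is that consent is recorded per purpose, with no record meaning no consent.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical AI-related purposes a customer can opt in to, one by one.
AI_PURPOSES = ("transcription_quality", "personalized_tips", "usage_analytics")


@dataclass
class ConsentRecord:
    customer_id: str
    purpose: str
    granted: bool
    recorded_at: str


def capture_consent(customer_id: str, purpose: str, granted: bool) -> ConsentRecord:
    """Record one explicit choice; reject purposes you never described."""
    if purpose not in AI_PURPOSES:
        raise ValueError(f"Unknown purpose: {purpose}")
    return ConsentRecord(customer_id, purpose, granted,
                         datetime.now(timezone.utc).isoformat())


def may_process(consents: list[ConsentRecord], customer_id: str, purpose: str) -> bool:
    """Only process data for a purpose the customer explicitly opted in to."""
    return any(c.granted for c in consents
               if c.customer_id == customer_id and c.purpose == purpose)


consents = [capture_consent("C-1042", "personalized_tips", True)]
print(may_process(consents, "C-1042", "personalized_tips"))  # True
print(may_process(consents, "C-1042", "usage_analytics"))    # False: no record, no consent
```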

6. Have a clear AI policy

Stringent laws and regulations can make companies and colleagues reluctant to use AI tools, even though many of those tools offer tremendous benefits. It is therefore essential to have a clear AI policy that clarifies what is possible and how to do it safely. Make sure this policy does not remain too general: keep it concrete and applicable.

Voys is a good example here: we use AI tools in our daily work, develop AI tools ourselves and provide third-party AI tools to customers. All application areas are explicitly addressed in our AI ethics and usage policy.

Extra tip: Share your AI policy with your team and set aside regular time to review it together, so the policy stays up to date and applicable. That way, everyone in the company knows how to handle AI safely and responsibly. Our AI ethics policy is not perfect and is constantly evolving, but it can serve as a source of inspiration.

Conclusion: Innovating safely with AI

It is a misconception that strict laws stop innovation. Regulations such as the GDPR and the AI Act encourage companies to think about ethical uses of technology. They require companies to think carefully about the long-term impact of their solutions and prevent companies from taking risks that harm their customers and violate their privacy.

With a few simple steps, you can deploy AI safely and responsibly within your organization. By putting transparency, ethical considerations and data security at the center, you not only work within the law, but you also strengthen your customers’ trust. That trust is the foundation for success in an increasingly AI-driven world.
