As we enter a new era defined by artificial intelligence and machine learning, the very underpinnings of many modern technologies are being put under the microscope by policy makers. This foundation? Data.
Data is necessary to refine most cutting-edge technologies, and it will only become more important as we develop more sophisticated AI and ML models powered by richer, higher-quality datasets.
However, there are strict regulations on how data can be used, especially within the EU. The EU’s General Data Protection Regulation (GDPR) requires companies to obtain consent before storing personal data, safeguarding citizens’ privacy both online and offline.
These regulations affect AI companies around the world, as they limit how data can be transferred outside the bloc – to servers in the United States, for example. Do you rely on cloud services in your technology stack? Then the rules probably affect you too.
Complying with these regulations is not straightforward, however. EU courts have repeatedly questioned the legality of the EU-US data transfer framework that is supposed to satisfy the GDPR, and a lasting replacement has yet to be found. It is therefore essential that AI companies everywhere monitor the situation closely.
Privacy Shield in tatters
The main problem for international software companies is the transfer of data from the EU to the United States and vice versa. The US is the EU’s second-largest trading partner and home to many of the world’s biggest tech companies, so using their services – often hosted in US-based data centers – inevitably means moving data across borders.
For some time, this was governed by the EU-US Privacy Shield framework agreed between the two powers. But in 2020 the framework was struck down by the European Court of Justice, on the grounds that US national security laws risked infringing the privacy of EU citizens’ data.
Since the ECJ issued this verdict, data transfers have been permitted under “standard contractual clauses”, but that route hasn’t proved much simpler. Amazon and Meta have been embroiled in legal disputes over how their algorithms operate within the confines of EU law. Meta even threatened to withdraw from Europe altogether rather than face further litigation under EU rules. Meanwhile, the Austrian data protection authority has ruled that even using Google Analytics to monitor website traffic is illegal under the GDPR, setting a precedent for other EU countries to follow.
This leaves companies in a bind. Even if you don’t want to do business in Europe, that doesn’t mean you can escape the reach of EU law. Suppose a company uses a dataset containing data from a Spanish respondent to develop a new product; the entire product could be put at risk if that company fails to adhere to the GDPR. But how do you do that?
Help is at hand – eventually
Fortunately, businesses may not be stuck in this legal quagmire forever. Hope arrived in the form of the recently announced Transatlantic Data Privacy Framework, the result of more than a year of behind-the-scenes negotiations and diplomacy, but still a long way from adoption.
According to the joint statement issued at the time, it “will provide a sustainable basis for transatlantic data flows, which are essential to protect citizens’ rights and enable transatlantic trade in all sectors of the economy, including for small and medium-sized companies.”
Until everything is settled, companies will be left without clarity for many months, making it harder for them to plan for the future. In the meantime, seek advice and audit every process that could be at risk. What data sources do you use? What cloud services do you rely on? Where are all your customers based? As long as this gray area of international law remains, it is essential to have firm answers to all of these questions.
Imminent AI legislation
And there’s an even more compelling reason for AI companies to get ahead of EU law now: there’s more to come. Following the regulatory framework proposal presented last year, the EU said that a new regulation on AI (entirely separate from the GDPR) “could come into force in the second half of 2022 in a transition period”.
The proposed regulations would take a risk-based approach. In some high-risk cases – such as services that underpin critical infrastructure – AI systems will be subject to strict obligations before they can be used in the EU. These could include human oversight measures to ensure transparency, risk assessment protocols and detailed compliance procedures. Violating them would result in substantial fines.
The decisions that AI-powered systems make, especially in healthcare, can change lives. It is therefore crucial that companies in this category remain vigilant, ensure their processes are transparent and keep careful track of where data is stored and transferred.
In the meantime, companies must be proactive in their approach to data to avoid breaking EU laws. Every business wants to grow, and at some point that means working in or with the EU and its vast population. And no one wants to foot the bill for an avoidable fine, especially when they would rather focus on innovation and expansion. It is imperative that companies tighten up their operations, verify that their datasets comply with EU laws and prepare for the regulations to come.