It is no secret that the use of artificial intelligence (AI) and machine learning (ML) is growing throughout the technological and digital sector.
In the past two years alone, the development of innovative new data applications has advanced considerably, with many technology and data-driven start-ups relying on AI/ML as the foundation of their applications.
AI is intelligence demonstrated by machines, as opposed to the ‘natural’ intelligence displayed by living beings, including humans. Machine learning is the field of inquiry devoted to understanding and building methods that ‘learn.’
The benefits to business growth and technological progress are significant, but with them comes a potential increase in data protection and privacy issues.
How can individuals, or data subjects (the people to whom the personal data relates), be confident that their personal data is in safe hands? While decisions about how the data is processed will be made by the data controller, there is a risk that decisions that have an impact on the person may be reached solely through automated AI/ML processing of their personal data, without human intervention.
In other words, is it right to allow technology to make decisions about us without any sort of human intervention?
Interested in mastering the powerful tools of data science? Talk to our course adviser team about how Southampton Data Science Academy's six-week, 100% online courses can help you apply data science and AI techniques at work.
GDPR and individual rights
As with all applications, processes, services, and products that involve the handling and managing of personal data, businesses in the United Kingdom (UK) must adhere to the General Data Protection Regulation (GDPR).
Sitting alongside the UK Data Protection Act 2018, the ‘UK GDPR’ applies to most businesses and organisations. It acts as a set of principles to ensure that personal data is processed lawfully, transparently and for the specific reason it was collected.
Principles that are particularly pertinent:
Accountability and governance
Lawfulness, fairness and transparency
Security and data minimisation
Ensuring individual rights
Failure to comply with these principles can land businesses in hot water with the Information Commissioner’s Office (ICO). While the media has focused on the risk of hefty fines, the risk of reputational damage might be greater.
The ICO has been paying close attention to the growth of AI and has offered guidelines and advice. One area of particular concern is ensuring that people have the same rights and freedoms when it comes to the management and processing of their personal data using AI/ML tools.
If you choose to use AI, there are additional measures you can take so that you can still use it to make predictions or to select individuals (for example, identifying good marketing prospects based on patterns or trends). However, you cannot use AI to make decisions about individuals without human intervention of some sort, before all the other requirements and obligations kick in.
When considering the use of AI, it is key that the basic data protection principles are observed. Individuals should retain the right to access their personal data; to have it corrected or completed; to have it erased; to have its processing restricted or stopped; to obtain and reuse it; and to object to its processing.
In January 2020, the ICO joined forces with The Alan Turing Institute to identify the risks and challenges businesses face with the rapid growth of AI and ML. The outcome was comprehensive and practical guidance, which they published last year for consultation.
Below is a helpful link to the ICO's guidance on the use of AI and its data protection considerations.
This post was written by the Data Protection Team at international education provider Cambridge Education Group.