Privacy issues you need to be aware of when using Artificial Intelligence…
The use of Artificial Intelligence tools has become increasingly common amongst businesses and organisations. AI is used for various purposes such as screening and editing documents, generating written or visual content, and analysing databases.
Privacy law and privacy obligations still apply to the use of AI tools. If AI tools are used without considering the protection of personal information, the result may be a privacy breach.
We outline below what to consider when it comes to privacy and AI.
Privacy Impact Assessment
The Privacy Commissioner recommends organisations conduct a Privacy Impact Assessment (PIA) before using a new AI tool.
The PIA should include a description of how the AI tool works, the sources of data it uses, the sources of data it was trained on, and how relevant those data sources are to the organisation’s purposes.
Collection of personal information
Personal information must only be collected where it is necessary for a lawful purpose. Personal information must generally be collected from the individual concerned, who should also be advised of what information is being collected and how it will be used.
If organisations are feeding personal information into AI tools, they should consider whether this is consistent with the purpose for which the information was collected. If the personal information collected is being used to train AI, such as in developing a chatbot or automated phone line, individuals must be advised of this and given the option to opt out of their information being used for training purposes.
If personal information is being collected using an AI tool, care must be taken that the AI is not obtaining the information from illegitimate sources such as a data breach. As AI draws on a variety of sources to obtain and collate information, it is not always clear where the information comes from, or whether it is accurate.
Security of personal information
Organisations have a responsibility to protect personal information against loss, unauthorised access, and other misuse.
Using AI tools can make personal information more vulnerable to security breaches, especially where information is being shared with a third-party provider. AI tools may also make it easier for information to be collated to create fake identities, and for hacking campaigns to be automated.
Organisations must be well-informed about the particular risks of the AI tools they use and have appropriate cybersecurity measures in place to address those risks.
Access and correction
Individuals are entitled to access the personal information an organisation holds about them. They are also entitled to ask the organisation to correct that information if it is inaccurate.
AI tools can make it difficult to comply with the access and correction rules because the AI tool may be trained using personal information which has become inaccurate. There may be no practical way to access and correct the original training data which the AI tool is still actively using.
Organisations must develop clear procedures to ensure they respond appropriately to access and correction requests.
Accuracy
Organisations that hold personal information must not use or disclose personal information without taking reasonable steps to ensure the information is accurate, up-to-date, complete, relevant, and not misleading.
Generative AI tools often produce inaccurate or misleading information, so it is important that organisations are aware of the risks and have procedures in place to check the accuracy of the output.
Overseas disclosure
Generally, personal information should not be disclosed outside of New Zealand. This rule does not apply to using offshore providers simply to store or process data, but it does apply where an AI tool uses the information for its own purposes, beyond the purpose for which the personal information was collected in New Zealand.
Organisations should make sure individuals are informed about the use and storage of their personal information overseas. They must also make sure any information used overseas is protected and secure.
Unique identifiers
Many organisations choose to use unique identifiers (such as client numbers) to identify individuals and to link their personal information to them.
Organisations must be aware that AI tools have the capacity to identify patterns in individual behaviour and assign their own unique identifiers to describe them, even where the organisation does not intend to use the AI tool in this way.
Additionally, a unique identifier used by one organisation should not be used by another, in order to avoid the creation of profiles on individuals. However, there is a risk that AI will be able to obtain information from other datasets and sources and match personal information together.
Organisations using AI to handle personal information must take care to continue complying with their privacy obligations. It pays for organisations to obtain advice from a legal professional to avoid privacy breaches that may result in serious harm and liability.
Leading law firms committed to helping clients cost-effectively will have a range of fixed-price Initial Consultations to suit most people’s needs in quickly learning what their options are. At Rainey Collins we have an experienced team who can answer your questions and put you on the right track.





