
2024 promises to be a dynamic year. As we integrate artificial intelligence (AI) technology into businesses, the importance of responsible practices, vigilant oversight and continuous learning cannot be overstated.
The intersection of generative AI (GenAI) and data protection has emerged as a critical focal point for businesses, regulators and individuals alike. This shift is occurring in an Asean region that is still adapting to relatively young privacy regulations, such as in Thailand and Indonesia.
New skills in value creation and risk management are now essential to GenAI, which will underpin digital transformation. The rapid integration of AI technologies into workflows has ushered in a new era of challenges and opportunities, particularly in data privacy and security.
We foresee five significant GenAI trends for the year, each of which gives rise to the need for skilled input by appropriately qualified and experienced professionals.
1. Mainstreaming GenAI will impact data privacy, security and ethics
In 2024, GenAI will go mainstream, enhancing value creation across diverse areas. It is expected to bring the highest productivity boost as a virtual collaborator in generating marketing content, data analysis and information summarisation, providing code, enhancing customer service and assisting in business planning and human resources (HR) activities.
However, this also has implications for privacy, security and ethics. Businesses must tread carefully, especially in areas like customer service, where AI-powered chatbots may inadvertently expose sensitive information.
Developers using tools such as Github co-pilot to write code have to be vigilant against potential biases, data leaks from using confidential information for model training, and security vulnerabilities in AI-generated code.
As general users in the company make use of public tools for specialised queries and document analysis or creation, there is risk of leaking proprietary or personal data. Robust frameworks and vigilant oversight when utilising GenAI are thus crucial.
2. More stringent due diligence on GenAI apps
From industry leaders like OpenAI to startups using application programming interfaces (APIs) and existing apps that integrate GenAI functionalities, each software type poses unique challenges. Therefore, organisations must conduct thorough due diligence across three broad areas of apps:
- Core apps: Developed by pioneers such as OpenAI and Midjourney, who created proprietary foundation models and now drive innovation with relentless R&D. However, recent leakage of user conversations highlights the need for reliable governance alongside advancements.
- Clone apps: Startups and individual developers, funded by venture capital, use the API of core apps to create solutions for specific niches or industries. While they play a pivotal role in the democratisation and commercialisation of GenAI, our research has uncovered some with questionable privacy practices.
- Combination apps: Existing applications that have incorporated GenAI features, such as Microsoft Copilot, exposing non-savvy users to the technology.
Our research in August 2023 on 100 mobile clone apps using OpenAI's GPT APIs revealed significant discrepancies in declared data safety practices and actual behaviour, posing potential privacy risks. Another study covering 113 popular apps shows many GenAI apps falling short of the EU's General Data Protection Regulation (GDPR) and AI transparency standards.
Clearly, responsible AI and the development of governance protocols for AI applications by appropriately qualified and skilled business professionals is critical. Businesses must verify software providers' data handling policies, including privacy policies and terms of use, as well as local data protection laws in the user's country, to guarantee an app's trustworthiness.
3. Increased risks as content creation transitions to content generation
The transition from content creation to content generation is expected to create more privacy, security and ethics-related breaches, whether through malice, accident or ignorance in the use of GenAI. The same ease of content generation is also available to scammers and hackers, enabling them to commit traditional crimes using new techniques.
Synthetic content generation, while a boon for marketers and influencers, raises concerns about data privacy and intellectual property. Instances of identity theft using deepfakes and voice cloning are on the rise.
Even if content is generated legitimately, there are risks of humans being "out of the loop", where there is no supervision, fact checking or validation. Consider the case of an unchecked AI-generated poll speculating on the cause of a woman's death, appearing next to a Guardian article. The poll caused an uproar among readers and was taken down.
Content generation increases the reliance on prompts to fine-tune large language models like chatbots. While this offers exciting possibilities for creative expression and workflow automation, the ease of crafting prompts brings a new category of risks: adversarial prompts.
These include prompt injection (inserting malicious content to manipulate the AI's output), prompt leakage (unintentional disclosure of sensitive information in responses) and jailbreaking (tweaking prompts to bypass AI system restrictions). Addressing these challenges is paramount in safeguarding responsible and secure development of content generation technologies.
4. Increasing involvement of privacy regulators will lead to more stringent enforcement on AI apps
Privacy regulators are poised to play a more active role in governing GenAI, especially with the European Data Protection Board and the European Data Protection Supervisor actively contributing to the EU AI Act, expected to pass this year.
In Singapore, the Personal Data Protection Commission has been actively involved in promoting the importance of AI governance and proposing advisory guidelines on the use of personal data in AI recommendation and decision systems.
Concurrently, existing privacy laws like the Personal Data Protection Act and the GDPR continue to play a crucial role, especially where personal data is processed by AI systems. These laws enforce principles such as consent, data minimisation and purpose limitation, ensuring compliance with data subject rights. Therefore, we expect to see more enforcement on AI applications.
5. Upskilling for data protection pro- fessionals
As GenAI welcomes multimodal capabilities, data protection officers and data governance professionals must acquire enhanced privacy management skills, learning to identify and mitigate privacy risks across different data types. Understanding operational aspects beyond traditional text analysis, such as image and video processing, voice recognition and other sensory data, is crucial for managing risk and compliance.
Kevin Shepherdson is the CEO and Founder of Straits Interactive Pte Ltd, a specialist in data privacy platform solutions and professional services in the Asean region.