Ethical and legal questions regarding AI rights are becoming increasingly pertinent with the rapid advancement of this technology, especially as Vietnam accelerates its digital transformation and AI adoption.
Vietnam recognises AI as a key driver of economic growth, with its National AI Strategy aiming to place the country among ASEAN’s top four in AI research and application by 2030. However, rapid adoption also brings challenges, including ethical governance, regulatory gaps, and risks to human rights and social stability.
Addressing these issues requires proactive solutions to ensure AI development remains both responsible and sustainable. Dr James Kang, Senior Lecturer at RMIT Vietnam, highlights the importance of ethical AI governance and well-defined legal frameworks as key factors in navigating Vietnam’s evolving AI landscape.
AI rights refer to the ethical and legal entitlements that may be granted to AI systems as they advance. While AI has improved in language processing and decision-making, defining its rights remains complex and speculative. Understanding AI rights is key to shaping Vietnam’s approach. Should advanced AI systems have rights to privacy or freedom? Could they claim such rights as humans do? Addressing these questions will help define AI’s place in Vietnam’s ethical, legal, and societal frameworks.
A clear distinction between narrow AI, designed for specific tasks, and general AI, which mimics human cognition, is essential. While discussions on AI rights often focus on general AI, most ethical and legal concerns today relate to narrow AI in automation, decision-making, and data processing.
Ethical AI governance must ensure dignity, fairness, and autonomy. Without regulations, AI risks being exploited, manipulated, or causing unintended harm. AI-driven hiring, for example, may reinforce bias if trained on unbalanced data, deepening social inequities. Addressing these risks is crucial for fairness in Vietnam’s workforce.
Manipulation is another concern. AI can absorb harmful biases if repeatedly exposed to skewed inputs. Without safeguards, it may become a tool for misinformation or unethical agendas.
Vietnam’s legal framework, designed for human entities, struggles to manage AI’s complexities. If AI generates intellectual property, who owns it? The AI, its developer, or the company? While Vietnam lacks specific AI laws, Decree 13/2023/ND-CP on Personal Data Protection imposes strict privacy requirements, indirectly shaping AI applications. Meanwhile, the government is exploring policies on AI liability and transparency.
Vietnam’s legal system must adapt to keep pace. Assigning AI legal identity, similar to corporations, raises questions about liability and ownership. Could autonomous AI be held accountable for damages? What if AI operates without a clear owner? These challenges highlight the urgent need for updated AI regulations to ensure accountability and ethical oversight.
From a legal and ethical standpoint, Vietnam can strengthen its approach to AI development by building on existing efforts while addressing key challenges. Drawing inspiration from global models and tailoring them to local needs, Vietnam has the opportunity to lead in responsible AI integration.
Vietnam has already integrated AI education into its curriculum, with institutions like RMIT University offering specialised programs. Expanding these efforts through supportive policies, such as incentives for educational institutions and accessible online courses, ensures fair access to education and promotes equality. Such strategies align with AI-related laws and ethics to cultivate an AI-ready workforce.
To foster innovation, Vietnam can support startups by offering research grants and establishing tech incubators. Collaborations with international and local tech companies should prioritise transparency and the ethical use of AI technologies. A robust legal framework is necessary to protect rights, maintain public trust, and balance innovation with accountability.
Optimising the use of open-source AI can enable cost-effective development without requiring advanced hardware, though safeguards must address potential security concerns. Building domestic AI infrastructure and enhancing cloud computing capabilities can also strengthen Vietnam’s AI ecosystem while reducing reliance on foreign platforms and ensuring digital sovereignty.
Vietnam's legal regulations must evolve to address AI-specific challenges, such as liability for autonomous decisions, intellectual property rights for AI-created content, and accountability for unethical outcomes. Learning from frameworks like the EU AI Act and Singapore’s AI governance model, Vietnam can adopt tailored measures to ensure ethical AI deployment while considering its unique socio-economic context.
By proactively addressing these legal and ethical challenges, Vietnam can achieve a balanced approach to AI development, driving innovation while safeguarding fairness, inclusivity, and data security. With strong policies and collaborative efforts, Vietnam is well-positioned to build a robust and responsible AI landscape.
Story: Dr James Kang, Senior Lecturer in Computer Science, School of Science, Engineering & Technology, RMIT Vietnam
Masthead image: Parradee - stock.adobe.com