Artificial Intelligence (AI) Governance: Experience from the EU and Recommendations for Vietnam

09:34 - 14/08/2023

Tran Gia Hien

Vietnam Institute of Strategy and Policy for Industry and Trade

The Vietnamese Government is quickly making the transition to e-governance and promoting the establishment of smart cities, 5G technology and e-commerce. Hence, there is a dire need for comprehensive, strategic, calculated regulations and policies to advance the digital economy, in order to guarantee growth and minimize potential risks. Although many legal documents address digital economy development, there is still little discussion of AI governance and the regulation of AI-related activities. Therefore, drawing on digital economy development strategies, programs and related documents from the European Union, this article aims to provide more information on AI governance policies and models, as well as possible lessons for Vietnam.

Keywords: AI governance; digital economy; legal framework; the European Union; Vietnam.

1. Legal framework for developing artificial intelligence in Vietnam

The world has seen remarkably fast and successful development of artificial intelligence and machine learning in recent years. Artificial intelligence has proved both transformative and disruptive, and was one of the main drivers behind the wave of layoffs in tech companies in 2023. Companies such as Twitter, Meta (Facebook), Google, Microsoft, Apple, Roku, Indeed, Amazon, Dell, HP, Cisco, IBM and SAP have all reduced their headcount globally in pursuit of investment in artificial intelligence. It can be argued that these companies cut back on the number of employees to reduce costs, correct earlier over-hiring and meet their financial goals. However, many of these layoffs targeted HR departments, whose functions can more easily be replaced by AI. In addition, it is reported that “companies such as Amazon have used AI to identify low-performing staff and then fire them” (Marr, 2023).

In Vietnam, the government has promulgated several regulations to develop AI, notably Directive No. 16/CT-TTg dated May 4th, 2017 on strengthening the capacity to access the Fourth Industrial Revolution. This Directive aims to increase the country's competitiveness as it goes through the Fourth Industrial Revolution. Vietnam certainly faces a wide variety of challenges in this phase, namely transforming and developing the information technology industry, promoting research and development in data management and data analysis, and using data to make informed and correct strategic decisions. There is also a need for a new intellectual property management system for the digital age, as well as higher cyber security standards. Directive No. 16/CT-TTg identifies AI as a priority for investment and development. Indeed, investing in AI would greatly benefit Vietnam in its pursuit of an efficient, competitive Industry 4.0.

Another regulation in favor of artificial intelligence development in Vietnam is Decision No. 1269/QD-TTg on the establishment of a National Innovation Center, modeled on similar institutions around the world, with the goal of supporting and developing “the life of entrepreneurship, innovation contributing to the innovation of the growth model on the basis of science and technology development” (Do, 2020). In growing its artificial intelligence industry, Vietnam can also take advantage of technological advances to enhance other crucial industries, such as health care. For example, the Ministry of Health has planned to use AI in medicine, issuing “Decision 4888/QD-BYT dated October 18, 2019 approving the Project on application and development of smart medical information technology for the period of 2019-2025” (Do, 2020). The use of AI in the health industry can raise the average person's living standards and contribute to a higher-quality, internationally recognized health system. Nonetheless, further legal frameworks need to be developed to address security risks, defective diagnoses and similar concerns.

Furthermore, AI can be applied in many other areas, including judicial institutions. During the COVID-19 pandemic, many organizations began to work online through applications such as Teams, Zoom or Google Hangouts. With the advent of artificial intelligence, even court proceedings can be held online (online dispute resolution), potentially with an AI arbitrator or mediator. While much work remains on the legality of such practices, it is necessary for the government to start “building regulations to ensure effective regulation of AI activities” and to complete the corresponding legal framework (Do, 2020).

2. AI governance in the European Union

Artificial intelligence has been on the rise, with the advent of ChatGPT and many other tools for illustration, data analysis, art, research, finance and education. These tools are more readily accessible than ever: “Several generative AI models, including ChatGPT and an image generator called Stable Diffusion, can now be accessed online for free or for a low-cost subscription, which means people across the world can do everything from assemble a children’s book to produce computer code in just a few clicks” (Heilweil, 2023). Through machine learning and exposure to vast amounts of data, these generative AI systems can “train and eventually learns to mimic” (Heilweil, 2023). AI can certainly transform the way humans live by making their lives easier, from automating simple tasks such as writing emails to drafting legal documents such as contracts (Heilweil, 2023).

While AI’s potential and benefits cannot be disputed, these tools are created by tech companies that “want to improve their models and technology, and people playing around with trial versions of the software give these companies, in turn, even more training data” (Heilweil, 2023). Eventually, these tools will be sold for profit. It is therefore crucial that governments take early initiatives to regulate the AI sector “from a strategic foresight perspective on how the market is expected to develop over the next 10-15 years” (Synodinou et al., 2021, p. 233).

The European Union has taken many actions to begin regulating artificial intelligence, such as publishing the “Ethics Guidelines for Trustworthy AI” and launching the White Paper on a European approach to excellence and trust, which facilitates a synchronized development of AI across European countries to avoid fragmentation (Synodinou et al., 2021, pp. 238-239). A uniform development of AI in the EU, combined with “a European governance structure on AI in the form of a framework for cooperation of national competent authorities”, would “allow avoiding fragmentation of responsibilities, increase capacity in Member States” (Synodinou et al., 2021, p. 239). In short, the EU’s governance plan for artificial intelligence depends on fervent, timely participation from all of its member states, as well as from “all consumer organizations and social partners, businesses, researchers, and civil society organizations” (Synodinou et al., 2021, p. 240).

In June 2023, the European Parliament “passed a draft law known as the A.I. Act, which would put new restrictions on what are seen as the technology’s riskiest uses” (Satariano, 2023). While the final version of this law has not yet been passed, it contains rules that “are the first comprehensive regulations for AI” (Browne, 2023). The bill “takes a ‘risk-based’ approach to regulating A.I., focusing on applications with the greatest potential for human harm” (Satariano, 2023). AI risks are categorized into three main types: first, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned; second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements; lastly, applications not explicitly banned or listed as high-risk are largely left unregulated (The AI Act).

The Act also provides a neutral definition of artificial intelligence: “a software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environment they interact with” (Sioli, 2021). Unacceptable-risk applications such as social scoring will be prohibited, while high-risk applications such as recruitment tools or medical devices will be “permitted subject to compliance with AI requirements and ex-ante conformity assessment” (Sioli, 2021). In addition, “AI with specific transparency obligations” (for example, impersonation bots) will be “permitted but subject to information/transparency obligations” (Sioli, 2021). Detailed requirements for high-risk AI are provided in Title III of the Proposal for an Artificial Intelligence Act (COM/2021/206). Notable obligations of high-risk AI providers include: establishing and implementing a “quality management system in its organization”; drawing up and keeping “up to date technical documentation”; “logging obligations to enable users to monitor the operation of the high-risk AI system”; undergoing “conformity assessment and potentially re-assessment of the system (in case of significant modifications)”; registering the “AI system in EU database”; affixing the “CE marking and sign declaration of conformity”; conducting “post-market monitoring”; and collaborating “with market surveillance authorities” (Sioli, 2021).
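The tiered structure described above can be sketched as a simple lookup, purely for illustration. The example use cases and the default-to-minimal rule follow the categories described in the text; the function, the tier names and the mapping itself are hypothetical and are not part of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "permitted subject to requirements and conformity assessment"
    TRANSPARENCY = "permitted subject to transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical mapping of example use cases to the Act's tiers,
# following the examples cited in the text above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "medical_device": RiskTier.HIGH,
    "impersonation_bot": RiskTier.TRANSPARENCY,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a use case; anything not explicitly
    banned or listed as high-risk defaults to the minimal tier."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

The key design point mirrored here is that the Act enumerates the prohibited and high-risk categories explicitly, so everything outside those lists falls through to the lightly regulated default.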

The Act aims to build “safeguards on the development and use of these technologies to ensure we have an innovation-friendly environment for these technologies such that society can benefit from them” (Browne, 2023). Indeed, the European Parliament set out “to make sure that AI systems used in the EU are transparent, traceable, non-discriminatory and environmentally friendly” (EU AI Act, 2023). By having AI systems overseen by people rather than relying on automation alone, the EU can prevent potential “harmful outcomes” (EU AI Act, 2023). The EU’s policy objectives on AI can be summed up in four main goals: setting “enabling conditions for AI development and uptake in the EU” (acquiring data and enhancing computing capabilities), making “the EU the right place; excellence from lab to the market”, “ensuring AI technologies work for the people”, and building “strategic leadership in the sectors” (Sioli, 2021).

While the EU is far ahead of other nations in regulating artificial intelligence, it also faces certain challenges. A highly controversial area of debate revolves around facial recognition: “The European Parliament voted to ban uses of live facial recognition, but questions remain about whether exemptions should be allowed for national security and other law enforcement purposes” (Satariano, 2023). Another area of debate is the use of biometric data from social media sites to build databases; tech leaders state that a ban would be hard to comply with, while policy makers want to prohibit the practice entirely (Satariano, 2023).

3. Recommendations for Vietnam

The digital economy is considered one of the most important growth drivers for Vietnam in the next five decades on its path to becoming a developed country. The documents of the XIII National Congress set the target that the digital economy will account for about 20% of GDP by 2025 (up from about 10% currently) and about 30% of GDP by 2030. To achieve this goal, the digital economy must grow three to four times faster than GDP. Therefore, strongly promoting the national digital transformation and developing the digital economy and digital society to create a breakthrough in the productivity, quality, efficiency and competitiveness of the economy is a must.
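The growth-differential figure can be checked with a back-of-the-envelope calculation. The sketch below is illustrative only: the roughly 6.5% annual GDP growth rate and the seven-year horizon (approximately 2023 to 2030, with the share rising from about 10% to 30%) are assumptions for the calculation, not figures taken from the Congress documents.

```python
def required_growth(share_now, share_target, years, gdp_growth):
    """Annual digital-economy growth rate needed for its share of GDP
    to rise from share_now to share_target over the given number of
    years, assuming GDP itself grows at gdp_growth per year."""
    share_ratio = (share_target / share_now) ** (1 / years)
    return share_ratio * (1 + gdp_growth) - 1

# Share rising from ~10% to ~30% over 7 years with ~6.5% GDP growth:
d = required_growth(0.10, 0.30, 7, 0.065)
print(round(d, 3), round(d / 0.065, 1))  # roughly 0.246 and 3.8
```

Under these assumptions the digital economy would need to grow around 25% per year, about 3.8 times the assumed GDP growth rate, which is consistent with the three-to-four-times figure above.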

Besides the opportunities and constructive environment for digital economy development, the development process in Vietnam also faces shortcomings, limitations and challenges. State management, as well as the development and implementation of programs, legal frameworks and policies for the digital economy, still has many problems that need to be resolved. The legal and institutional system is not yet complete and synchronized for the development of the digital economy, particularly the regulations governing digital business activities. The legal framework for piloting new business models and services of the digital economy needs to be completed, and legal tools and technology systems for monitoring and managing the operation of digital platforms and cross-border platforms in cyberspace are still lacking.

In this context, with regard to AI and the sharing economy, Vietnam must focus on research and development. Besides building centers to support the development and transfer of artificial intelligence technology, Vietnam should also build a network of institutes and research and development centers for AI (Ministry of Information and Communications, 2022). Vietnam should also invest in IT infrastructure, especially storage facilities, as a foundation for rapidly developing technologies such as AI. In addition, it is pivotal to “pay attention to regulations, standards and processes on technology, information security, data sharing” and data protection when drafting policies related to AI applications (Ministry of Information and Communications, 2022). Indeed, the Vietnamese government should “develop policies to encourage domestic enterprises to experiment and deploy AI applications in production and business as well as survey and research successful deployment models in the world” in order to devise suitable policies for localities (Ministry of Information and Communications, 2022).

References:

  1. Browne, R. (2023). ‘EU lawmakers pass landmark artificial intelligence regulation’, CNBC, 14 June. Available at: https://www.cnbc.com/2023/06/14/eu-lawmakers-pass-landmark-artificial-intelligence-regulation.html (Accessed: 2 August 2023).
  2. Do, L. K. (2020). ‘Legal framework for the digital economy in Vietnam’, PS-engage, 14 December. Available at: https://ps-engage.com/legal-framework-for-the-digital-economy-in-vietnam/ (Accessed: 6 July 2023).
  3. Heilweil, R. (2023). ‘What is generative AI, and why is it suddenly everywhere?’, Vox, 5 January. Available at: https://www.vox.com/recode/2023/1/5/23539055/generative-ai-chatgpt-stable-diffusion-lensa-dall-e (Accessed: 2 August 2023).
  4. Marr, B. (2023). ‘The real reasons for big tech layoffs at Google, Microsoft, Meta and Amazon’, Forbes, 30 January. Available at: https://www.forbes.com/sites/bernardmarr/2023/01/30/the-real-reasons-for-big-tech-layoffs-at-google-microsoft-meta-and-amazon/?sh=3c6d0b572b67 (Accessed: 6 July 2023).
  5. Ministry of Information and Communications (2022). ‘Proposing legal frameworks and policies for artificial intelligence development’, 30 August. Available at: https://www.mic.gov.vn/mic_2020/Pages/TinTuc/153444/de-xuat-cac-khung-phap-ly-va-chinh-sach-uu-tien-cho-phat-trien-tri-tue-nhan-tao.html (Accessed: 2 August 2023).
  6. Satariano, A. (2023). ‘Europeans Take a Major Step Toward Regulating A.I.’, The New York Times, 14 June. Available at: https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html (Accessed: 2 August 2023).
  7. Sioli, L. (2021). ‘A European Strategy for Artificial Intelligence’ [Powerpoint presentation]. European Commission. Available at: https://www.ceps.eu/wp-content/uploads/2021/04/AI-Presentation-CEPS-Webinar-L.-Sioli-23.4.21.pdf (Accessed: 2 August 2023).
  8. Synodinou, T. E., Jougleux, P., Markou, C. and Prastitou-Merdi, T. (2021). EU Internet Law in the Digital Single Market. New York: Springer.
  9. The AI Act. ‘The Artificial Intelligence Act’. Available at: https://artificialintelligenceact.eu/ (Accessed: 2 August 2023).