Digital Rule of Law Forum | Weidong Ji, Changshan Ma, Weiqiu Long: Value Alignment, Ethics, and Personal Information Protection in Artificial Intelligence
2023-08-04 | Author: 海上法学院 (Maritime Law School)


Recently, the "Digital Rule of Law Forum (Summer 2023)" was held at the China-Shanghai Cooperation Base of Shanghai University of Political Science and Law, focusing on "Legal Issues of Artificial Intelligence". Prof. Ji Weidong of Shanghai Jiao Tong University, Prof. Ma Changshan of the Digital Rule of Law Research Institute of East China University of Political Science and Law, and Prof. Long Weiqiu of Beihang University (Beijing University of Aeronautics and Astronautics) delivered keynote speeches. Maritime Law School has been authorized to summarize the three scholars' keynote speeches, presenting their thinking on the rule of law in digital development.

Guest Introduction:

Ji Weidong

Senior Professor of Liberal Arts and doctoral supervisor at Shanghai Jiao Tong University; Director of the China Institute of Law and Society; Director of the Research Center for Artificial Intelligence Governance and Law; President of the Computational Law Branch of the China Computer Federation; Vice President of the Legal Education Research Society of the China Law Society.

Ma Changshan

Professor, doctoral supervisor, and Director of the Digital Rule of Law Research Institute of East China University of Political Science and Law; Editor-in-Chief of the Journal of East China University of Political Science and Law; expert member of the Ministry of Education's "Artificial Intelligence Ethics Knowledge Domain Expert Collaboration Group" for teaching resources in key areas; Vice President of the China Jurisprudence Research Society.

Long Weiqiu

Professor, doctoral supervisor, and Dean of the School of Law of Beihang University (Beijing University of Aeronautics and Astronautics); Vice President of the China Network and Information Law Research Society; Executive Director of the China Civil Law Research Society; Executive Director of the China Space Law Society; Vice President of the China Behavioral Law Society.

How Artificial Intelligence Aligns with Human Values

Ji Weidong

OpenAI has attracted wide attention since the official release of ChatGPT on November 30 last year. Its powerful capabilities have even led programmers to worry about losing their jobs, and it has raised many new questions. For example, if generative AI develops autonomous consciousness it may get out of control: should its development be curbed, or should it be further strengthened to guard against risk? Even though it has not yet reached the level of human intelligence, the large language model alone has already changed the operating system of human civilization, unsettling the balance of the relationship between humans and machines.

There is another big problem: information distortion. Artificial intelligence sometimes answers as solemnly as a judge, apparently well grounded, yet the problem of false information is in fact very serious. Once AI training has exhausted all human-produced data and material, much of what we face in the future will itself be AI-generated, and its authenticity will have to carry a question mark.

The law will also face many problems, which become even more prominent when the strengthening of individual autonomy brought by blockchain is combined with the empowerment of large models.

China now has more than 70 large models, of which more than 30 are very active. The U.S. has a similarly large number. These pose an obvious challenge to national governance: how should it respond? Researchers at OpenAI have raised the issue of value alignment: if you worry about AI getting out of control, you must make AI's value orientation consistent with humanity's. But the value orientations of the world's civilized countries already differ. How should they be aligned, by what standard, and what if alignment cannot be achieved?

Recently, three researchers in the United States published a very interesting article that draws on Rawls's "veil of ignorance" to ensure that AI algorithms are fair and value-aligned. They ran experiments such as assigning AI assistants to different roles and letting them make choices. When the AI decision-making system is told all the information, its decisions are biased toward benefit. But when, following Rawls's concept of the "veil of ignorance", all information is concealed and the AI is left to decide, its decisions favor justice over benefit. The "veil of ignorance" and Rawls's theory are therefore highly instructive for AI governance and value alignment. Especially where international values differ and substantive standards are hard to harmonize, reaching a basic consensus on ethical norms and principles of justice through such a purely procedural approach is an appropriate solution.

"People-oriented": the first principle of AI rule of law ethics

Ma Changshan

The rule-of-law ethics of artificial intelligence is rich in connotation and broad in scope, but its first principle should be "people-oriented".

First of all, we oppose the theory of "technological supremacy". There is an argument today that technology has its own independent logic and can solve problems that human beings cannot, a kind of "technological salvationism". It opposes anthropocentrism, and from the perspective of AI development it would lead to human disaster: everything would be determined by technology, technology would become a means of controlling human beings, and human nature would be seriously eroded. The current consensus of the international community is that the development of technology must serve human beings and enhance their well-being, rather than becoming a more effective means of controlling them. If it results in power becoming digitalized and digital technology becoming a form of power, such a society will be hard to accept. The development of digital technology should benefit all people, who should at least receive a fair share of the digital dividend, realizing genuine "joint construction, joint governance, and sharing". Overemphasizing the supremacy of technology is not our goal.

Second, follow the principle of technology for good. The development of technology should respect personality, conform to human nature, be trustworthy and benevolent, and be committed to universal access, support for the weak, and safety and responsibility. We often encounter "rogue" software: harassing calls that constantly change their virtual numbers; advertising pop-ups that cannot be closed, or whose "x" for closing actually opens them; replies of "unsubscribe" that in fact subscribe; and ticket-grabbing software that submits 100 requests per second while even a fast-fingered person can submit only three, which means that in every second there are effectively 97 people in line ahead of you. More broadly, the arbitrary assignment of red health codes during the epidemic also violated the requirement that technology be used for good.

Third, the development of technology should be reasonable and compliant. Existing laws, regulations, rules, and normative documents are not very clear about the concepts of data and information and the rights attached to them. The Data Security Law defines "data", but current law defines only "personal information", not "information" itself. In fact, the relationship between data and information is a practical question rather than a theoretical one: data and information transform into each other in different application scenarios. For example, to stay in a hotel you must provide your ID card, your cell phone number, or in some hotels even your face, all sensitive personal information that becomes corporate data once entered into the hotel's system. If the Bureau of Culture and Tourism then requires hotels to provide that data for research on the tourism market, the data becomes information again. Rules are needed on how such material may be used in which circumstances, and on how government departments may share it with one another, so that its use is reasonable and compliant.

Personal Information Protection in the Development of Artificial Intelligence

Long Weiqiu

Artificial intelligence has reached the stage of superintelligence, which brings many unexpected risks; for instance, superintelligence has a far greater need for data. Where does this data come from? If data is scraped without sufficient regulation, it will inevitably infringe the data rights of others.

Although developers now claim that the data they crawl is public data, some of it in fact already contains a large amount of personal information, even identifiable and sensitive personal information such as names and phone numbers, usually collected without the prior consent of the data subjects.

Cases of personal information being scraped and used without consent, and then leaked, have already occurred. In March this year, Samsung's DS division found that 20 incidents of confidential-information leakage had occurred after it began using ChatGPT. Also in March, Italy discovered the loss of ChatGPT users' conversation data and payment information. Similar issues were reported at Apple in the U.S., which has now restricted the use of ChatGPT within the company. In May, the U.S. Congressional Research Service released its "Deep AI and Data" report, focusing on data and personal information protection. In June, the first class action was filed in a California court against OpenAI, alleging that in developing, marketing, and operating its superintelligence products it illegally collected and used the information of hundreds of millions of Internet users, including children.

In June, the European Parliament voted to adopt its draft negotiating mandate for the Artificial Intelligence Act, which could become the world's first AI law. It adds more safety controls for tools like ChatGPT and develops a risk-based regulatory regime to balance the innovative development of AI with safety norms.

How should China respond? At the beginning of this year, the Cyberspace Administration of China issued the Provisions on the Administration of Deep Synthesis of Internet Information Services (draft), which set out requirements for the protection of personal information: where generative AI services are provided, the use of personal information requires the individual's consent. On April 10, the Payment & Clearing Association of China noted that the ChatGPT tool has attracted wide attention and that some enterprise employees are already using it in their work; in order to effectively address the risks, protect customer privacy, safeguard data security, and raise the level of data-security management in the payment and clearing industry, the Association called for the prudent use of ChatGPT in accordance with the Cybersecurity Law, the Data Security Law, and other laws. For the time being, China's legislation is seriously lagging behind, and revision should begin immediately to add norms at the statutory level: if an AI law is not enacted, relevant regulations should be formulated, or these provisions could be added to other laws.