Regulating AI around the world, from China to Brazil
Pew Research Center released a poll last week in which a majority of Americans — 52 percent — said they feel more concerned than enthusiastic about the growing use of artificial intelligence, citing worries about personal privacy and human control over new technologies.
The proliferation of generative AI models such as ChatGPT, Bard, and Bing, which are all publicly available, has brought AI to the fore. Now, governments from China to Brazil to Israel are also trying to figure out how to harness the transformative power of AI, while reining in its worst excesses and crafting rules for its use in everyday life.
Some countries, including Israel and Japan, have responded to the technology’s rapid growth by clarifying how existing data, privacy, and copyright protections apply — in both cases, paving the way for copyrighted content to be used to train AI. Other countries, such as the UAE, have issued vague and sweeping statements of AI strategy, launched working groups on AI best practices, or published draft legislation for public review and deliberation.
Still others are taking a wait-and-see approach, even as industry leaders, including OpenAI, the maker of the viral chatbot ChatGPT, urge international cooperation on regulation and oversight. In a statement in May, the company’s CEO and two of its co-founders warned of the “potential existential risk” associated with superintelligence, a hypothetical entity whose intellect would exceed human cognitive performance.
“Stopping it would require something like a global monitoring system, and even that is not guaranteed to work,” the statement said.
However, there are only a few concrete laws around the world that specifically target the regulation of AI. Here are some of the ways in which legislators in various countries are trying to address questions surrounding its use.
Brazil has a draft artificial intelligence law, the culmination of three years of proposed (and stalled) bills on the subject. The document — which was released late last year as part of a 900-page Senate committee report on artificial intelligence — spells out the rights of users who interact with AI systems and provides guidelines for classifying different types of AI according to the risk they pose to society.
The law’s focus on users’ rights puts the onus on AI service providers to supply information about their AI products to users. Users have the right to know when they are interacting with an AI, and the right to an explanation of how an AI made a particular decision or recommendation. Users may also contest AI decisions or request human intervention, particularly if an AI decision is likely to have a significant impact on them, as with systems related to self-driving cars, employment, credit assessment, or biometric identification.
AI developers are also required to conduct risk assessments before bringing an AI product to market. The highest risk classification applies to any AI system that deploys “subliminal” techniques or exploits users in ways harmful to their health or safety; such systems are banned outright. The bill also identifies a category of potentially “high-risk” AI applications, including those used in health care, biometric identification, and credit scoring, among others. Risk assessments for “high-risk” AI products must be published in a government database.
All AI developers are liable for damages caused by their own AI systems, although developers of high-risk products are subject to a higher standard of liability.
China has published draft regulations on generative AI and is soliciting public comment on the new rules. Unlike most other countries, though, China’s draft states that generative AI must reflect “socialist core values.”
In its current version, the draft regulations say developers “take responsibility” for the output generated by their AI, according to a translation of the document prepared by Stanford University’s DigiChina Project. There are also restrictions on the sourcing of training data: developers are legally liable if their training data infringes on someone else’s intellectual property. The regulations also stipulate that AI services must be designed to generate only “true and accurate” content.
These proposed rules build on existing legislation covering deepfakes, recommendation algorithms, and data security, giving China an edge over countries crafting new AI laws from scratch. The country’s internet regulator also announced restrictions on facial recognition technology in August.
China has set dramatic goals for its technology and AI industries: in the “Next Generation AI Development Plan,” an ambitious document published by the Chinese government in 2017, its authors wrote that by 2030, “China’s AI theories, technologies, and applications should achieve world-leading levels.”
In June, the European Parliament voted to approve what it calls the “Artificial Intelligence Act.” Like the Brazilian draft legislation, the AI Act classifies AI into three risk tiers: unacceptable, high, and limited.
AI systems deemed unacceptable are those considered a “threat” to society. (The European Parliament offers “voice-activated toys that encourage risky behavior in children” as one example.) Such systems are banned under the AI Act. High-risk AI must be approved by European officials before going to market, and again throughout the product’s life cycle. This category includes AI products related to law enforcement, border management, and employment screening, among others.
AI systems deemed to pose limited risk must be appropriately labeled so that users can make informed decisions about their interactions with the AI. Otherwise, such products largely escape regulatory scrutiny.
The law still needs to be approved by the European Council, although parliamentary lawmakers hope to wrap up the process later this year.
In 2022, the Israeli Ministry of Innovation, Science and Technology published a policy draft on the regulation of artificial intelligence. The document’s authors describe it as “the ethical and business-oriented compass for any company, organization or government agency working in the field of AI,” and emphasize its focus on “responsible innovation.”
The draft Israeli policy says the development and use of AI must respect “the rule of law, fundamental rights and public interests, and in particular, [preserve] human dignity and privacy.” Elsewhere, it vaguely states that “reasonable measures shall be taken in accordance with accepted professional concepts” to ensure that AI products are safe to use.
More broadly, the draft policy encourages self-regulation and a “soft” approach to dealing with government interference in AI development. Rather than proposing unified industry-wide legislation, the document encourages sector-specific regulators to consider tailored interventions when appropriate, and encourages government to try to align with global best practices in AI.
In March, Italy briefly banned ChatGPT, citing concerns about how, and how much, user data the chatbot collects.
Since then, Italy has committed nearly $33 million to support workers at risk of being left behind due to digital transformation – including but not limited to artificial intelligence. About a third of that amount will be used to train workers whose jobs may become obsolete due to automation. The remaining money will be directed towards teaching digital skills to people who are unemployed or economically inactive, in the hope of stimulating their entry into the labor market.
Japan, like Israel, has adopted a “soft law” approach to regulating AI: the country has no regulations governing the specific ways in which AI can or cannot be used. Instead, Japan has chosen to wait and see how AI develops, citing a desire to avoid stifling innovation.
For now, AI developers in Japan have had to rely on adjacent laws — such as those relating to data protection — to serve as guidelines. For example, in 2018, Japanese lawmakers revised the country’s copyright law to allow copyrighted content to be used for data analysis. Since then, lawmakers have clarified that the revision also applies to AI training data, paving the way for AI companies to train their algorithms on other companies’ intellectual property. (Israel has taken the same approach.)
Regulation is not at the forefront of every country’s approach to AI.
In the United Arab Emirates’ National Strategy for Artificial Intelligence, for example, the country’s regulatory ambitions get only a few paragraphs. In short, the AI and Blockchain Council will “review national approaches to dealing with issues such as data management, ethics, and cybersecurity,” observing and incorporating global best practices in AI.
The rest of the 46-page document is dedicated to encouraging the development of AI in the UAE by attracting AI talent and integrating the technology into key sectors such as energy, tourism and healthcare. This strategy, as the document’s executive summary boasts, is in line with the UAE’s efforts to become “the best country in the world by 2071”.