International perspectives on Responsible AI
1 Altimet Security, India
Abstract
According to the World Economic Forum, AI is expected to add $15 trillion to the world's economy by 2030. In particular, generative AI (Gen AI) could add $60 billion to $110 billion annually to the pharmaceutical and healthcare industries. These huge productivity gains and this economic growth potential are driving the adoption of AI worldwide. At the same time, they increase the need for a responsible AI framework that ensures AI is ethical, transparent, trustworthy and reliable.
Keywords
AI Governance, Ethics, Transparency, Trustworthiness
1. Introduction
Generative Artificial Intelligence could add $200 billion to $340 billion to the banking industry, and $240 billion to $390 billion to the ecommerce and retail sectors. It can improve employee productivity by 66%. Overall, Gen AI is expected to disrupt the entire public and private landscape, and its usage is expected to witness robust growth. Alongside these productivity gains, however, Gen AI introduces significant security risks. With so much potential in these tools, it is our duty to ensure that AI is used safely. Responsible AI rests on three core pillars: safety, reliability and explainability.
Countries and regions are culturally, politically, ethnically and linguistically diverse, so approaches to governance and ethical frameworks vary by region. Countries hold different perspectives on what fair, transparent and ethical use of AI means. This paper explores these differences and examines the underlying policies to understand how they affect global cooperation.
2. AI adoption across regions
In the European Union, 68% of large firms have already adopted AI, and the EU is a pioneer in AI governance, with stringent regulations such as the AI Act and the GDPR. In the US, by contrast, one survey finds that only 30% of organizations are confident about AI regulations and compliance. This points to a clear regulatory gap between the EU and the US.
In developing regions like Africa and Latin America, AI adoption is low, with fewer than 10% of firms reporting significant usage. This disparity in AI adoption across regions underlines the need for international cooperation to bridge such gaps.
3. AI Governance models and diverse frameworks:
3.1. Europe
The European Union leads in this space with the EU AI Act. The Act takes a comprehensive approach, classifying AI systems into risk categories with a specific focus on user rights, transparency and accountability, and requiring AI systems to align with human rights principles. Data privacy has long been critical in Europe under the General Data Protection Regulation (GDPR), which sets out rules for data collection and processing.
The EU AI Act is strongly human-centric: the rights and dignity of individuals are given the highest importance. It also emphasizes explainability and reliability, so that AI decision-making processes are transparent and bias and discrimination are minimized. In highly regulated industries like banking, healthcare, finance and government, safety becomes even more important.
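The Act's risk-tier logic can be sketched in code. The four tiers below reflect the Act's actual classification, but the mapping from example use cases to tiers and the one-line obligation summaries are simplified illustrative assumptions, not legal guidance:

```python
# Illustrative sketch of the EU AI Act's four-tier risk classification.
# The use-case-to-tier mapping and obligation summaries are simplified
# assumptions for illustration only.
RISK_TIERS = {
    "social_scoring": "unacceptable",  # banned outright under the Act
    "credit_scoring": "high",          # strict obligations apply
    "chatbot": "limited",              # transparency duties (disclose AI use)
    "spam_filter": "minimal",          # largely unregulated
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment, documentation, human oversight",
    "limited": "transparency disclosures to users",
    "minimal": "voluntary codes of conduct",
}

def obligations_for(use_case):
    # Conservatively treat unknown use cases as high risk.
    tier = RISK_TIERS.get(use_case, "high")
    return tier, OBLIGATIONS[tier]

print(obligations_for("chatbot"))
```

The key design idea is that obligations scale with risk: a spam filter and a credit-scoring system face very different duties even though both are "AI systems" under the Act.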
3.2. United States
While Europe takes a comprehensive approach, the US follows a decentralized one, with technology companies often taking the lead in developing ethical frameworks. The emphasis is on innovation and entrepreneurship, with the government playing a smaller role in regulating technology.
The tech industry drives debates around ethics, transparency and safety. Transparency plays a vital role in regulated industries like banking and healthcare, but overall regulation lags behind Europe, although California has recently moved to enact its own AI legislation. Corporate responsibility and innovation are prioritized, creating tensions around privacy, labor issues and the pace of technological progress.
3.3. China
China's AI governance model is characterized by state control and the prioritization of national security and economic growth. The government actively shapes AI development, particularly in areas like surveillance and military AI, where the focus is on maintaining societal stability and advancing strategic interests.
3.4. Emerging economies (India, Brazil, Africa, etc.)
In emerging economies, AI governance is at a nascent stage. India's AI policies are being drafted with a focus on economic growth and job creation, and AI is expected to be inclusive, delivering societal benefits to a very large population.
Given existing disparities, fairness in access to AI is critical. Because emerging economies already face inequality, unequal access to technology could widen the gap further. In sectors like healthcare and agriculture, explainability and trustworthiness become vital, as AI can significantly affect basic services delivered to people in these countries.
3.5. Singapore
Singapore released an AI governance framework in 2020 and launched an updated version in 2024 that addresses many practical issues around privacy, data governance, fairness and ethics. Singapore promotes responsible AI development, especially in healthcare and urban solutions.
AI governance, then, is clearly not unified across regions: each region's policies reflect a context unique to it.
4. Regional AI policies and practices:
AI policies are implemented differently across regions, reflecting each country's distinct approach to governance. Global cooperation is therefore significant for AI policy and regulation.
4.1. Europe:
Europe remains a pioneer and has set the standard for responsible AI. Through the recent AI Act, the EU aims to ensure AI is used in a safe, ethical and responsible manner. While Europe places a heavy emphasis on regulation, other regions like the US and China have looser controls. Europe's focus on collaborative governance, including the Global Partnership on AI, should help reduce the governance gap across regions.
4.2. US (innovation vs regulation):
In the US, the focus is on innovation while regulation takes a back seat. The tech industry drives Gen AI usage and productivity gains, while ethical concerns around safety, privacy and bias persist. Less stringent regulation can slow global cooperation, especially with Europe, but the US continues to lead in innovation, which gives it an edge.
4.3. China:
China's AI governance places strong emphasis on state control and national security. AI is used extensively for surveillance and social management, which raises serious concerns about privacy and individual rights, values that are central to European governance.
4.4. India and developing economies:
The challenges of AI adoption are different in emerging economies, which have limited resources and disparities in access. With the focus on development, tailored AI policies are needed. Global cooperation must take the needs of these countries into account, especially fairness and access; failure to do so could sharply deepen existing inequalities and slow the global adoption of generative AI.
Estonia, a small nation in the European Union, is adopting AI in government services: it is building an e-governance system integrated with AI through a network of interoperable chatbots embedded on public websites. This illustrates how smaller nations are finding unique ways to integrate AI into daily use, improving societal productivity in line with national interests.
Hence, the huge differences across countries create challenges in achieving global cooperation and collaboration on AI ethics and governance.
5. Responsible AI: core traits
The core traits of responsible AI are explainability, fairness, and trustworthiness and safety. They are interpreted and implemented differently across regions, as the following subsections examine.
5.1. Explainability
Explainability is a critical aspect of responsible AI: it ensures that users can understand how decisions are made.
5.1.1 Europe:
Explainability is central here: the EU AI Act contains provisions requiring AI systems, particularly high-risk ones, to be transparent, so that users can understand how decisions are made. This matters most in regulated industries like banking and healthcare. Germany, for example, is a leader in AI adoption with a strong focus on industrial applications in manufacturing, and its policies give great importance to the ethical and transparent use of AI.
5.1.2 United States:
Explainability is valued in the US but not strictly enforced; there is a persistent trade-off between commercial interests and transparency.
5.1.3 China:
Explainability takes a back seat in China, where effectiveness and control are the primary considerations. State systems prioritize security and outcomes over transparency.
5.1.4 Developing nations:
Explainability remains important when AI is deployed in essential service sectors like agriculture, healthcare and finance.
5.2. Fairness
5.2.1 Europe:
In Europe, fairness is closely associated with human rights. Strong principles require AI systems to operate without bias, and the EU governance model places heavy emphasis on bias removal.
5.2.2 United States:
As in Europe, fairness is considered critical in the US, with great significance attached to race, gender and economic inequality. Solutions, however, are largely left to tech companies. The recent California AI legislation also does not address certain biases, such as discrimination by businesses and government, and industry pressure is expected to play a big role in its fate.
Hiring in the US is a well-documented example. Companies use hiring algorithms to filter and shortlist candidates for interviews, and there have been persistent concerns that such algorithms discriminate based on gender, race or socioeconomic background. Even though these systems are designed to be objective, if the data used to train them is biased, they can still produce biased and discriminatory outcomes.
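A minimal sketch can show how this happens. The data below is entirely hypothetical: group "A" was historically shortlisted more often than group "B" for identical qualifications, and a naive model that simply learns historical shortlist rates reproduces that disparity:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (years_experience, group, shortlisted).
# Group "A" was historically favoured despite identical qualifications.
history = [
    (5, "A", 1), (5, "A", 1), (5, "A", 1), (5, "A", 0),
    (5, "B", 1), (5, "B", 0), (5, "B", 0), (5, "B", 0),
]

def train_rate_model(records):
    """Learn the historical shortlist rate for each (experience, group) key."""
    totals, hits = defaultdict(int), defaultdict(int)
    for exp, group, label in records:
        totals[(exp, group)] += 1
        hits[(exp, group)] += label
    return {key: hits[key] / totals[key] for key in totals}

model = train_rate_model(history)

# Two equally qualified candidates differ only in group membership,
# yet the "objective" model reproduces the historical disparity.
score_a = model[(5, "A")]  # 0.75
score_b = model[(5, "B")]  # 0.25
```

Even with group membership removed as an explicit feature, proxies correlated with it (postcode, school attended) can leak the same signal, which is why the governance debate centers on the training data as much as the model itself.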
5.2.3 China:
In China, fairness is defined with societal harmony in mind rather than individual rights. Because the focus is on controlling AI to keep the country secure, fairness is viewed collectively rather than individually.
Facial recognition powered by AI is used in state surveillance. Though it plays a role in preventing crime, it raises concerns about personal privacy and government overreach, including the monitoring of ethnic minorities. The balance between public safety and individual rights remains an open question when the state deploys AI in these ways.
5.2.4 Developing regions:
Fairness is again very important to ensure that existing inequality gaps do not widen. AI has to be inclusive and accessible even to people from marginalized communities, with its benefits distributed equitably across all sections of society.
5.3. Trustworthiness and Safety:
AI systems have to work without causing any harm. Hence, trust and safety are very important when it comes to AI Governance.
5.3.1 Europe:
Europe imposes stringent regulations here. Trustworthiness is vital in high-risk sectors like healthcare, pharma and public services, and it is treated as an obligation: governments must ensure that the AI in use is safe and trustworthy.
For example, autonomous vehicles in Europe face ethical challenges; safety and liability in accidents are regularly debated. When it comes to safety, who should be prioritized, the pedestrian or the passenger? Such gray areas in AI adoption and deployment make a governance framework for these situations a clear and urgent need.
5.3.2 United States:
Safety is often reactive in the US: measures are typically implemented only after failures or ethical breaches occur. This makes the current environment very challenging, particularly in areas like autonomous vehicles and facial recognition.
5.3.3 China:
Trustworthiness in China is linked to state control: AI is used to enhance societal management, and safety is often secondary to state goals.
5.3.4 Developing regions:
In developing regions, safety is vital. In essential services like healthcare and agriculture, AI usage is expected to be very high. Hence, one has to ensure that AI is reliable and safe for adoption.
6. Implications for global cooperation
Such differences in AI governance models across regions make global cooperation challenging: differing priorities, ethical norms and structures slow efforts to harmonize.
6.1 Challenges:
Regional differences create considerable friction. Europe's stringent regulations sit uneasily alongside the innovation-driven approach of the US, complicating cross-border data sharing and collaborative projects. Likewise, China's state-controlled AI policies diverge from Western positions on data privacy and human rights.
Global cooperation thus remains challenging because regions have different priorities. International initiatives like the Global Partnership on AI (GPAI), however, foster collaboration between governments, academia and the private sector to develop governance frameworks that make AI systems fair, trustworthy and inclusive.
6.2 Global Partnership on AI:
Governance models in Western countries, with their focus on transparency and individual rights, often clash with state-centric models like China's. But opportunities for cooperation exist: forums like the Global Partnership on AI can help align AI principles globally and promote responsible AI development and usage. Bridging the gap between governance frameworks is essential to ensure that AI tools and technologies benefit humanity.
6.3 Role of OECD:
The Organisation for Economic Co-operation and Development (OECD) has published AI principles, adopted by over 40 countries, as a set of guidelines for responsible and trustworthy use of AI. These guidelines focus on human rights, inclusivity, robustness and transparency, providing a shared framework for ethical AI governance.
7. Role of industry in defining frameworks:
The impact of AI varies significantly across sectors as well as regions, and this variation also shapes the different AI frameworks seen around the world. A few sector-specific examples illustrate the point.
7.1 Europe:
Europe stresses data protection and patient privacy in AI-powered diagnostic tools in healthcare. GDPR rules ensure that personal health data is used with proper oversight, building trust in AI-related technologies.
7.2 United States:
In the US, adoption is greatest in the financial sector, in areas like fraud detection and risk assessment. Without overarching governance, however, the responsibility for transparency and fairness falls to the teams deploying AI for these purposes.
7.3 China:
In China, AI plays a crucial role in the defense sector, with huge investments in military applications like autonomous weapons and surveillance technologies. AI in warfare raises ethical questions, especially around international laws governing armed conflict.
7.4 Emerging economies:
In developing countries, AI is being applied in agriculture, with use cases including optimizing crop yields and reducing waste. Challenges persist where small-scale farmers cannot access or implement such technologies. These varied applications across industries highlight the need for governance frameworks that address sector-specific ethical challenges.
In emerging economies like Brazil, India and parts of Africa, the challenges and opportunities are quite different. In India, for example, AI can empower the economy and reduce inequality across sections of society by focusing on agriculture, healthcare and education: the national strategy emphasizes more efficient farming, better healthcare in rural regions, and education for underprivileged children.
In healthcare, the focus is on AI-assisted diagnostics to detect diseases like pneumonia, TB and cancer through X-rays, with further proposals to detect ophthalmological conditions such as diabetic retinopathy and cataracts. In agriculture, the plan is to improve farmers' decision making; climate-sensitive advisory services in Indian languages and remote sensing to enhance food security are being explored. In finance, AI-based credit scoring models using alternative data can extend loans to small farmers who are not eligible through traditional processes, improving inclusion. In fact, AI could be a turning point in India's financial inclusion initiatives.
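To illustrate the alternative-data idea, the toy score below combines signals such as mobile-money history and utility payments. The features, weights and cap are invented for illustration; they do not represent any real lender's model:

```python
# Hypothetical alternative-data credit score. All features and weights
# below are illustrative assumptions, not a real scoring model.
def alt_credit_score(mobile_payment_months, on_time_utility_ratio,
                     crop_yield_consistency):
    """Combine alternative-data signals into a 0-100 score.

    mobile_payment_months: months of mobile-money transaction history
    on_time_utility_ratio: fraction of utility bills paid on time (0-1)
    crop_yield_consistency: stability of yields across seasons (0-1)
    """
    history_signal = min(mobile_payment_months, 24) / 24  # cap at 2 years
    score = 100 * (0.4 * history_signal
                   + 0.35 * on_time_utility_ratio
                   + 0.25 * crop_yield_consistency)
    return round(score, 1)

# A farmer with no formal credit history but a strong digital footprint:
print(alt_credit_score(18, 0.9, 0.8))  # prints 81.5
```

The design point is that a borrower with no bank statements can still produce a usable risk signal, which is what makes such models attractive for financial inclusion, and also why their fairness and explainability need scrutiny.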
India still faces challenges, however. Data governance remains a problem, and there is a need for privacy laws and a framework similar to Europe's GDPR; India's Digital Personal Data Protection (DPDP) Act is intended to address this. Brazil, meanwhile, is focused on AI adoption in public security and smart cities, looking to tackle urbanization challenges like traffic and crime, but significant infrastructure gaps may prevent the country from adopting AI efficiently.
In Africa, there is great scope for AI-driven growth despite traditional barriers. Kenya, for instance, uses AI for financial inclusion through mobile payment systems. Challenges such as limited local AI expertise and weak data infrastructure, however, can prevent African countries from fully benefiting from the AI revolution. Emerging economies therefore have to balance growth against the ethical concerns of AI adoption and deployment.
8. Conclusion
There are substantial differences in AI governance across regions: Europe focuses on transparency and individual rights, the US on innovation, and China on state control. These differences present both opportunities and challenges for responsible AI, making global cooperation essential for responsible AI development.
Collaboration has to respect regional diversity. Since regions are culturally, ethically and economically diverse, international policies must be flexible enough to accommodate them while still ensuring AI is ethical, transparent, trustworthy and safe. This need grows as AI systems across regions become more interdependent.
Global collaboration should also prioritize inclusivity, which is especially important for emerging economies, and ensure that AI's benefits are distributed equitably and ethically across all regions. An effective global AI governance framework is still a long way off, but with collaboration and sustained dialogue we can build a culture in which AI serves humans responsibly and safely.