
The Ethics and Values at the Core of AI

After more than 50 years of intensive development, artificial intelligence (AI) has rapidly become one of the fastest-growing technologies globally. AI holds the promise of huge transformative power that could produce significant socioeconomic and environmental benefits, productivity gains, and growth and prosperity for society, if it is developed wisely.

AI: The Good, the Bad, and the Ugly

By definition, AI involves programming high-performance computers to think, learn, and behave in a similar way to humans through a complex system of algorithms. AI is designed to help people make better predictions and informed decisions, and to perform routine and complex tasks better than humans can. Potentially, this could have huge benefits for a wide range of sectors, including healthcare, mobility, agriculture, education, and communication.

  • Mobility:  Intelligent transportation will make traffic safer, faster, more efficient, and greener.
  • Healthcare: Assisted diagnoses, early prevention, and precision treatment will have huge benefits for people’s health.
  • Education: AI opens the door to personalized learning; bridging skills gaps; and optimizing workloads for teachers, students, and administrators.
  • Agriculture: Potential benefits include increased agricultural efficiency, improved crop yields, traceability, and lower food production costs.
  • Manufacturing: Improved productivity and lower costs will span every step of the manufacturing process.
  • Cross-cultural communication: Real-time translation across multiple languages will make communication easier than ever before.

However, alongside its potential, AI is fueling anxieties, including legal and ethical concerns about the trustworthiness of AI systems. The risks it poses to society include threats to cultural values and privacy, fraud, unemployment, and bias.

Left to its own devices, free from human judgment or intervention, AI can produce errors with devastating consequences, especially in critical and sensitive fields such as healthcare, traffic safety, and investment decisions, and it can contribute to cyberbullying, economic disruption, and job losses.

Who’s in charge?

No single actor can be accountable for or provide answers to all the concerns about how to address the risks and challenges posed by AI, especially given its meteoric evolution. It is a shared responsibility among all stakeholders.

The good news is that international and national initiatives are already underway to establish the necessary technological and strategic frameworks and guidelines to ensure that AI is an ethically, socially, legally, and ecologically responsible technology.

These objectives are designed to govern AI's evolution through key principles that ensure transparency, accountability, lawfulness, privacy, safety, human dignity, well-being, and environmental sustainability. They also seek to ensure that AI prioritizes the public good (AI for Good) over commercial and geopolitical gains.

Because of the huge complexity of AI systems, those objectives can’t be met unless close collaboration is established among three key actors:

1) Technology and industry

2) Regulators

3) Academia and education institutions

1. Technology & Industry: AI development platforms and their underlying technologies should offer a transparent, robust, trustworthy, secure, and safe AI infrastructure with the necessary mechanisms and tools to address data privacy, right to access, accountability and data protection. AI development infrastructure should enable developers and data scientists of different skill levels to rapidly build, train, test and deploy models, including data preparation, model development, optimization, and deployment. Other responsibilities include creating trustworthy algorithms with all the required tools, using secure and robust computing power, cloud infrastructure, and high-speed networking with sufficient storage capabilities.

2. Regulations: Global initiatives are in progress to generate corresponding AI frameworks, strategies, and guidelines to ensure that responsible-AI objectives are met throughout the AI lifecycle, with three key requirements to be incorporated: i) Lawful: complying with applicable laws and regulations; ii) Ethical: ensuring adherence to ethical principles and values; and iii) Robust: from both a technical and social perspective.

Ideally, these three requirements should work in harmony with each other and with key stakeholders to create a pathway to responsible AI without over-regulation, since the technology is still evolving and the rules governing it will need to be revisited and adjusted in the years to come.

3. Education and Academia: Academia and research will play an essential role in steering AI in the right direction for the common good.

Education institutions will be at the center of developing AI's educational path, delivering a next generation of developers and researchers who place greater emphasis on ethics and values.

Creating an educational atmosphere with the latest AI technology infrastructure and innovation hubs available can enable researchers and developers to work more closely with industry and communities through integrated knowledge-based AI-ecosystem platforms. This will help ensure that the design and development of AI systems is meaningful, adaptable, responsible, and robust enough to be trusted by the outside world.

It will also help universities and education institutions to generate adaptable policies and guidelines for using AI and continually re-validate these policies as technologies evolve.

New AI technologies can be quickly evaluated and used to facilitate learning, promote creativity and critical thinking, and find ways to embrace or integrate AI into teaching methodologies while maintaining academic standards and integrity. Teachers might redesign their courses, implementing more oral exams, promoting creativity through teamwork and handwritten essays, or integrating AI into classes by asking students to challenge its outputs.

A call for more collaboration and global dialogue

As AI has emerged as a top priority for all nations, international collaboration in generating guidelines and regulations is, now more than ever, becoming necessary to address the challenges presented by AI in terms of ethics, lawfulness, trustworthiness, and broader philosophical issues.

Technology, industry, academia, and regulators need to work together to establish best practices and universally accepted standards that maximize AI's potential and mitigate its risks. Ensuring harmony among technologies, industry, society, and regulations can in turn ensure that AI is beneficial to all.

Education will certainly play a key role, preparing future AI professionals, students, younger generations, and society at large to understand the importance of accountability and ethical considerations in AI development and applications.

Ethics and values have to be embedded in AI development by design.

Keeping humans in the AI loop must be carefully considered. Concepts such as swarm AI, for example, can amplify intelligence by building connected intelligent systems that include people. Converging the power of algorithms with human expertise, knowledge, creativity, and intuition enables people to bring unique ideas, wisdom, and insights that improve AI's outcomes.
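As a minimal sketch of what keeping a human in the loop can look like in practice, the Python snippet below defers any prediction whose confidence falls below a threshold to a human reviewer. The 0.9 threshold and the reviewer function are illustrative assumptions, not a specific swarm-AI implementation.

# A minimal human-in-the-loop sketch: low-confidence model outputs are
# deferred to a person instead of being acted on automatically.
# The 0.9 threshold and the ask_human() stand-in are illustrative assumptions.
from typing import Callable

CONFIDENCE_THRESHOLD = 0.9  # below this, a human makes the final call

def decide(model_label: str, model_confidence: float,
           ask_human: Callable[[str], str]) -> str:
    """Return the final decision, combining machine prediction and human judgment."""
    if model_confidence >= CONFIDENCE_THRESHOLD:
        return model_label                # high confidence: accept the machine decision
    return ask_human(model_label)         # low confidence: a human confirms or overrides

# Example usage with a stand-in reviewer that double-checks the suggestion.
if __name__ == "__main__":
    reviewer = lambda suggestion: f"human-reviewed:{suggestion}"
    print(decide("approve_loan", 0.97, reviewer))   # -> approve_loan
    print(decide("approve_loan", 0.55, reviewer))   # -> human-reviewed:approve_loan

The design choice here is simply that the machine never acts alone on uncertain cases; human judgment remains the final authority where the stakes or the uncertainty are high.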

AI systems need to be human-centric, based on a commitment to use AI in the service of humanity and the common good, supporting global cooperation and the achievement of the UN Sustainable Development Goals (SDGs), with the objective of improving humanity's quality of life while leveraging human knowledge, wisdom, values, morals, and sensibilities.


Article Source: Huawei


Disclaimer: Any views and/or opinions expressed in this post by individual authors or contributors are their personal views and/or opinions and do not necessarily reflect the views and/or opinions of Huawei Technologies.
