Essential Ethics for Artificial Intelligence

Garnet Smith

From The Matrix to The Terminator, Hollywood has shown us countless
examples of artificial intelligence (AI)
taking over and rebelling against
organized society. But do the real dangers of artificial intelligence differ
from the fictional rebellions we have seen in the media? While the most serious
dangers of artificial intelligence may be decades away, advancements in
AI technologies are bringing us closer to genuine threats. Merriam-Webster
defines artificial intelligence as "an area of computer science that deals with
giving machines the ability to seem like they have human intelligence."
Countless services that we use daily rely on artificial intelligence,
including virtual personal assistants like Siri, credit card fraud detection, automated
customer support, and media recommendations. AI developers are actively working
to make dramatic improvements in computer vision, natural language processing,
and robotic motion systems.
The Partnership on Artificial Intelligence to Benefit People and Society (POAI)
is a non-profit collaborative organization including representatives from huge tech
companies. Aptly enough, the organization was named by a robot. The group
consists of Amazon, DeepMind, Google, Facebook, IBM, and Microsoft, all leaders in
AI technologies. The nonprofit plans to research AI continually as it advances
and to help shape guidelines for the future of AI development. Its members
believe artificial intelligence holds great promise for raising the quality of
people's lives, and the group's role is to ensure AI systems can safely fulfill
their individual purposes.
The Enderle Group is a marketing consultancy devoted to providing
perspective for today's technology leaders. "The partnership should provide more
rigor and initial standardization, particularly around safety protocols, definitions and
certifications," explains Rob Enderle, a leading technology analyst at the Enderle
Group. Partnership on AI has released a set of eight tenets, which include the
following commitments: researching and providing an open platform where the
ethics and new advancements of AI can be discussed by the community; addressing
concerns and possible challenges from the international community while protecting
the privacy of individuals; and striving to create a culture of open trust and
cooperation among AI developers worldwide by supporting best practices for AI creation.
Two major AI contributors are notably missing from the nonprofit: Apple and
Elon Musk, both of whom have pioneered AI research and technology across several platforms.

Eric Horvitz, the managing director of Microsoft Research, explains, "We've been in
discussions with Apple, and they're enthusiastic about the effort. I personally hope
to see them join." Apple has been focusing on product development and maintains a
culture of secrecy throughout the company, which hinders collaboration. As for Elon
Musk, he is already sponsoring another AI research organization, OpenAI. POAI hopes
to bring in government officials and additional tech companies to ensure that AI's
development is safe and transparent to the public.
Artificially intelligent systems have already experienced many problems and
glitches in their respective functions. This year, the first international beauty
contest judged by an algorithm was held. The Beauty.AI system scores faces on how
closely they resemble an ideal of human beauty. Thousands of people submitted
pictures of themselves to the newly released AI to rate their attractiveness.
The results suggested that the AI judged darker skin to be less beautiful,
sparking a heated controversy over how algorithms can perpetuate bias and yield
offensive results.
Partnership on AI will encounter several challenges that must be addressed
quickly if it is to remain a driving force for the standardization of AI. Through
continual growth and change, the group aims to establish new standards for AI
while reducing bias and harmful assumptions. The chief challenge is that POAI has
no authority over the development of AI; unethical practices can only be
discouraged, not stopped. Even though the organization wishes to moderate AI
development, it cannot enforce its policies. Following the code of ethics remains
the choice of individual AI developers, though in the near future it may become
a moral obligation for companies.
The AI we use on a daily basis isn't innately destructive; the human providing
the commands is the real danger surrounding AI. There are two main ways AI can
become dangerous. First, AI programmed as autonomous weapons, designed to kill
and devastate, poses a real threat to people globally; weapons of mass
destruction guided by artificial intelligence could, in the wrong hands, trigger
wars and mass casualties. Second, AI developed for a benign purpose can become
dangerous and destructive; take, for example, a self-driving car that
malfunctions and damages people and property while carrying out its task. POAI
isn't looking to regulate what AI can do, but rather who uses AI for dangerous
and destructive purposes. The group is attempting to prove that AI production
does not need government regulation. This role will be instrumental in ensuring
that AI stays safe and continues to fulfill its purpose of bettering life for
people globally.