Garnet Smith
Eric Horvitz, the managing director at Microsoft Research, explains: "We've been in
discussions with Apple, and they're enthusiastic about the effort. I personally hope
to see them join." Apple has been focusing on product development and maintains a
culture of secrecy throughout the company, which hinders collaboration. As for Elon
Musk, he is already diligently sponsoring another AI research company, OpenAI.
POAI wishes to include even more government officials and other tech
companies to ensure AI's development is safe and transparent to the public.
Artificially intelligent systems have already experienced many problems and
glitches in their respective functions. This year, the first international beauty contest
judged by an algorithm was held. The Beauty.AI system identifies the faces that
most closely resemble an ideal of human beauty, and thousands of people submitted
pictures of themselves to the newly released AI to have their attractiveness rated.
The results suggested that the AI judged darker skin to be less beautiful. This
sparked heated controversy over how algorithms can perpetuate bias and yield
offensive results as a consequence.
Partnership on AI will encounter several challenges that must be addressed
quickly if it is to remain a driving force for the standardization of AI. Through
continual growth and change, the group may be able to establish new standards for
AI while eliminating bias and harmful assumptions from society. This raises the
issue that POAI has no authority over the development of AI; unethical practices
can therefore only be discouraged, not stopped. Even though the organization
wishes to moderate AI development, it cannot enforce its policies. It is the choice
of AI developers whether to follow the code of ethics, though in the near future
this may become a moral obligation for companies.
The AI we use on a daily basis isn't innately destructive; the human providing the
commands is the real danger surrounding AI. There are two main ways AI can be
developed and used for dangerous purposes. First, AI programmed as autonomous
weapons, built to kill and devastate, poses a real threat to people globally.
Weapons of mass destruction governed by artificial intelligence could, in the
wrong hands, cause massive war outbreaks and mass casualties. Second, AI
developed for a set purpose can itself become dangerous and destructive. Take,
for example, a self-driving car that malfunctions and damages people and property
while carrying out its purpose. POAI isn't looking to regulate what AI can do, but
rather who uses AI for dangerous and destructive purposes. The organization is
attempting to prove that AI production does not need government regulation. This
role will be instrumental in ensuring that AI stays safe and can continue to fulfill
its purpose of bettering life for people globally.