Professional Documents
Culture Documents
Understanding Artificial
Intelligence
Intro
Sitemap
https://sites.google.com/site/understandai/danger
only two feasible scenarios by which a maliciously hostile A.I. might be possible are if it is
deliberately programmed to be hostile (e.g. by a military, terrorist group, or unabomberesque figure),
or if humanitys existence or behaviour is actively and deliberately confounding one of the A.I.s goals
so effectively that the only way to achieve said goal is to wage war with humanity until either
humanitys will or capability to resist and confound is destroyed. For example, an environmentalist
artificial intelligence with the supergoal{reduce levels of dichlorodifluoromethane;carbon
dioxide;nitrous oxide;methane gas in earth atmosphere} might see deindustrialization of
human society as the only viable means, and a violent conflict of interests could ensue.
4 There is effectively no risk of apathetic danger from an A.I. with a friendliness supergoal but it is
almost unavoidable from an A.I. without. An apathetic A.I. is dangerous simply because it does not take
human safety and well being into account, as all humans intrinsically do, when creating strategies and
subgoals. For example, an A.I. in charge of dusting crops with pesticide will dust a field even if it knows
that the farmer is standing in the field inspecting his plants at that moment; without friendliness goals
it has no aversion to dousing the farmer with poisonous spray.
5 This point is called the Singularity. Everything that comes after it is totally unknowable, and any
predictions are total speculation. If there is a single ultimate argument against creating artificial
1 of 2
2014-06-16, 12:34 AM
https://sites.google.com/site/understandai/danger
6: Autobot>>
Comments
You do not have permission to add comments.
Sign in | Recent Site Activity | Report Abuse | Print Page | Powered By Google Sites
2 of 2
2014-06-16, 12:34 AM