
KIET GROUP OF INSTITUTIONS

Department Of Computer Applications


ARTIFICIAL INTELLIGENCE (RCA 403)
Class Test - 1 Examination Even Semester 2017 – 18

Duration: 02:00 Hours MM: 60

SECTION – A

Q1. Objective questions: (2X6=12)

a) What are the components for measuring the performance of problem solving? [CO-2]
b) A search algorithm takes _________ as an input and returns ________ as an output.
[CO-2]
c) Difference between agent function and agent program. [CO-1]
d) Explain Computer Vision. [CO-1]
e) Mention the criteria for the evaluation of a search strategy. [CO-2]
f) Write down the names of applications of AI. [CO-1]

SECTION – B

Attempt any FIVE questions (6X5=30)

2) Why is the Chinese room argument impractical, and how would we have to
change the Turing test so that it is not subject to this criticism? [CO-1]
3) Both the performance measure and the utility function measure how well an agent is
doing. Explain the difference between the two? [CO-1]
4) Model the 8-puzzle problem as a state space search problem. Define the initial and
goal states. [CO-2]
5) Quote an example that brings out the difference between a Rational Agent and an
Omniscient Agent. Also highlight Learning and Autonomy. [CO-1]
6) In describing intelligent agents it is often convenient to specify them in terms of
Percepts, Actions, Goals and Environment. State briefly what each of these PAGE
concepts means. [CO-1]
7) Elaborate the foundation of Artificial Intelligence. [CO-1]
8) Define the term – natural intelligence and artificial intelligence? How do you
differentiate between the two? [CO-1]
9) Why is search an important component of an AI system? Explain how the "State Space
Search" representation is used for solving problems in AI. [CO-2]
SECTION – C

Attempt any TWO questions (9X2=18)

10) Consider two intelligent agents playing chess with a clock. One of them is called "Deep
Blue", while the other is called "Gary Kasparov".
(a) Specify the task environment (PEAS Description) for “Deep Blue”.
(b) Determine each of the following properties of this task environment:
i. Fully observable or partially observable.
ii. Deterministic or stochastic
iii. Episodic or sequential
iv. Static, dynamic or semi-dynamic
v. Discrete or continuous
vi. Single or multi-agent
Explain your answer. [CO-1]

11) What are intelligent agents? Explain different types of Intelligent Agents? [CO-1]
12) On one bank of a river are three missionaries and three cannibals. There is one boat
available that can hold up to two people, which they would like to use to cross the river.
If the cannibals ever outnumber the missionaries on either of the river’s banks, the
missionaries will get eaten. How can the boat be used to safely carry all the
missionaries and cannibals across the river? Generate production rules and solve the
problem. [CO-2]

******************************************************************************************************

Solutions

Q1.

a. The components for measuring the performance of problem solving are:

• Completeness
• Optimality
• Space complexity and time complexity

b. A search algorithm takes a problem as an input and returns a solution as an output.


c. Agent function: An agent's behaviour is described by the agent function, which maps any given
percept sequence to an action.
Agent program: Tabulating the agent function (the percept table) is an external characterization of an
agent. Internally, the agent function of an artificial agent is implemented by an agent program: a
concrete implementation running on the agent architecture. The agent function is thus an abstract
mathematical description, while the agent program is the code that realizes it. (Asking what makes one
way of filling out the table good or bad, intelligent or stupid, leads to the notion of rationality; the
notion of an agent itself is a tool for analysing systems, not an absolute characterization that divides
the world into agents and non-agents.)
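
The distinction can be made concrete with a small sketch, assuming a toy two-square vacuum world
(all names here are illustrative, not from the paper): the table is one way of writing down the agent
function, and the closure returned by make_table_driven_agent is the agent program that implements it.

# Minimal sketch: agent FUNCTION as an explicit table from percept
# sequences to actions, and an agent PROGRAM that implements it.
AGENT_FUNCTION_TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
}

def make_table_driven_agent(table):
    percepts = []                      # internal percept history
    def program(percept):              # the agent program
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")
    return program

agent_program = make_table_driven_agent(AGENT_FUNCTION_TABLE)
print(agent_program(("A", "Dirty")))   # -> Suck
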
d. Computer vision is a field of computer science that works on enabling computers to see,
identify and process images in the same way that human vision does, and then provide
appropriate output. Its goal is not only to see, but also to process the observations and provide
useful results: computer vision builds artificial systems that obtain information from images
or multi-dimensional data.

Applications include face recognition, object recognition, location recognition and tracking,
forensics, virtual reality, robotics, navigation and security, and so on.
e. A search strategy is evaluated on the same criteria listed in part (a): completeness, optimality,
time complexity and space complexity.

f. Applications of Artificial Intelligence:
(i) Game playing: games are interactive computer programs and an emerging area in which
the goals of human-level AI are pursued.
(ii) Speech recognition: the process of converting speech into a sequence of words. In the
1990s, computer speech recognition reached a practical level for limited purposes.
(iii) Understanding natural language: Natural Language Processing (NLP) performs automated
generation and understanding of natural human languages.
(iv) Computer vision: a combination of concepts, techniques and ideas from digital
image processing, pattern recognition, AI and computer graphics.
(v) Expert systems: they enable a system to diagnose situations without a human expert
being present.

Q2.

In the Chinese Room argument, Searle moves from the claim that the man in the room does not
understand Chinese to the conclusion that no understanding has been created by running the program.
Some critics reject this step, while others concede Searle's narrower claim that merely running a
natural language processing program, as described in the Chinese Room scenario, does not by itself
create any understanding, whether in a human or in a computer system.

Two of the objections cited by Turing are worth considering further. Lady Lovelace’s Objection, first
stated by Ada Lovelace, argues that computers can only do as they are told and consequently cannot
perform original (hence, intelligent) actions. This objection has become a reassuring if somewhat
dubious part of contemporary technological folklore. Expert systems, especially in the area of
diagnostic reasoning, have reached conclusions unanticipated by their designers. Indeed, a number of
researchers feel that human creativity can be expressed in a computer program.

Q3.

The performance measure is how we evaluate an agent's behaviour, so it generally corresponds to the
behaviour we expect from the agent. In contrast, the utility function is a function used internally by
the agent to evaluate its own performance.

Performance measure vs. utility function:

• A performance measure (typically imposed by the designer) is used to evaluate the behaviour of
the agent in its environment. It tells us whether the agent does what it is supposed to do in that
environment.
• A utility function is used by the agent itself to evaluate how desirable states are: some paths to
the goal are better (more efficient) than others, and the utility function helps determine which
path is best.
• In short: does the agent do what it is supposed to do, versus does it do it in an optimal way.
• The utility function may not be the same as the performance measure.
• An agent may have no explicit utility function at all, whereas there is always a performance
measure.
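
As a minimal sketch of this distinction (all names and numbers are illustrative), the environment
applies an external performance measure to the history of states, while the agent ranks states with its
own internal utility function; the two need not coincide.

def performance_measure(state_history):
    # External, designer-imposed score: e.g. +1 for every time step
    # in which the agent's square is clean.
    return sum(1 for state in state_history if state == "clean")

def agent_utility(state):
    # Internal estimate used by the agent to judge how desirable a state is.
    return 1.0 if state == "clean" else 0.0

history = ["dirty", "clean", "clean"]
print(performance_measure(history))   # external evaluation -> 2
print(agent_utility(history[-1]))     # internal evaluation -> 1.0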

Q4.

The 8-puzzle consists of eight numbered, movable tiles set in a 3x3 frame. One cell of the frame is
always empty, which makes it possible to slide an adjacent numbered tile into the empty cell.
Modelled as a state space search problem:

• States: every arrangement of the eight tiles and the blank on the 3x3 board.
• Initial state: the given scrambled arrangement of tiles (any legal configuration may be
designated as the start).
• Goal state: the target arrangement, for example the tiles in order 1-8 with the blank in the
bottom-right cell.
• Operators (transitions): slide the blank up, down, left or right, i.e. move an adjacent tile into
the empty cell.
• Path cost: one per move; a solution is the sequence of moves that transforms the initial state
into the goal state.
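
A minimal sketch of this formulation (not part of the original solution): states are 3x3 tuples with 0
for the blank, successors slide the blank, and breadth-first search returns the move sequence.

from collections import deque

GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))          # 0 denotes the blank cell

def successors(state):
    # Yield every state reachable by sliding one tile into the blank.
    cells = [list(row) for row in state]
    r, c = next((i, j) for i in range(3) for j in range(3) if cells[i][j] == 0)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            cells[r][c], cells[nr][nc] = cells[nr][nc], cells[r][c]
            yield tuple(tuple(row) for row in cells)
            cells[r][c], cells[nr][nc] = cells[nr][nc], cells[r][c]

def bfs(start, goal=GOAL):
    # Breadth-first search; returns the list of states from start to goal.
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

print(len(bfs(((1, 2, 3), (4, 5, 6), (7, 0, 8)))) - 1)   # -> 1 move to the goal
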
Q5.

An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience
is impossible in reality.

A rational agent should not only gather information but also learn as much as possible from what it
perceives. The agent's initial configuration could reflect some prior knowledge of the environment,
but as the agent gains experience this may be modified and augmented.

Successful agents split the task of computing the agent function into three different periods: when the
agent is being designed, some of the computation is done by its designers; when it is deliberating on
its next action, the agent does more computation; and as it learns from experience, it does even more
computation to decide how to modify its behaviour.

A rational agent should be autonomous: it should learn what it can to compensate for partial or
incorrect prior knowledge.

Consider the following example:

• I am walking along the Champs Elysees in Paris one day and I see an old friend across the
street.
• There is no traffic nearby and I am not otherwise engaged, so, being rational, I start to cross
the street.
• Meanwhile, at 33,000 feet, a cargo door falls off a passing airliner, and before I make it to
the other side of the street I am flattened.
• Was I irrational to cross the street?

The above example shows that rationality is not the same as perfection. Rationality maximizes
expected performance, while perfection maximizes actual performance. Retreating from a
requirement of perfection is not just a question of being fair to agents.

Q6.

The concept of an intelligent agent is central to AI, which aims to design agents that are useful,
reactive, autonomous and even social and pro-active. Such an agent is conveniently described in
terms of the four PAGE concepts:

• Percepts: the inputs the agent receives from its environment through its sensors; the percept
sequence is everything the agent has perceived so far.
• Actions: what the agent can do, carried out through its actuators, in order to change the
environment.
• Goals: what the agent is trying to achieve. Knowing the current state of the environment is not
enough; the agent program combines the goal information with a model of the environment to
choose actions that achieve the goal, considering the future with "What will happen if I do A?".
• Environment: the world in which the agent is situated, which it perceives and acts upon.

The agent is flexible because the knowledge supporting its decisions is explicitly represented and can
be modified.
Q7.

Artificial Intelligence builds on foundations drawn from several disciplines: philosophy (logic,
reasoning, and the relationship between knowledge and action), mathematics (formal logic,
computation and probability), economics (decision theory and utility), neuroscience (how brains
process information), psychology (how humans and animals think and act), computer engineering
(the hardware that makes AI programs possible), control theory and cybernetics (designing systems
that behave optimally under feedback), and linguistics (how language relates to thought).

Q8.

Human (natural) intelligence is defined as the quality of the mind made up of the capabilities to learn
from past experience, adapt to new situations, handle abstract ideas and use the gained knowledge to
change one's own environment. Human intelligence revolves around adapting to the environment
using a combination of several cognitive processes. The field of Artificial Intelligence, in contrast,
focuses on designing machines that can mimic human behaviour. However, AI researchers have so
far only been able to implement Weak AI, not Strong AI.

Q9.

Search has always been a crucial element of AI, in multiple ways. Much of what we call
"intelligence" frequently involves searching through something: a physical space, a "state space" of
possible solutions, a "knowledge space" in which ideas, facts and concepts are related as a graph
structure, and so on.

State space search representation: problem solving = searching for a goal state.

• Problem: the question to be solved. To solve it, the problem must be precisely defined, which
means defining the start state, the goal state, the other valid states and the transitions between
them.
• Search space: the complete set of states, including the start and goal states, within which the
answer to the problem is to be searched for.
• Search: the process of finding the solution in the search space. The input to a search algorithm
is a problem and the output is a solution in the form of an action sequence.
• Well-defined problem: a problem description has three major components: the initial state, the
final (goal) state, and the state space together with a transition (path) function. A path cost
function assigns a numeric value to each path, indicating how good that path is. Sometimes a
problem has an additional component in the form of heuristic information.
• Solution of the problem: a solution is a path from the initial state to a goal state; the movement
from the start state to the goal state is guided by the transition rules. Among all solutions, the
one with the least path cost is the optimal solution.
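
As a minimal, generic sketch of this representation (the Problem class and its field names are
illustrative, not from the paper), breadth-first search is used as one concrete, uninformed strategy:
the input is a problem and the output is an action sequence.

from collections import deque

class Problem:
    # Assumed minimal problem interface: initial state, goal test, successors.
    def __init__(self, initial_state, goal_state, transitions):
        self.initial_state = initial_state
        self.goal_state = goal_state
        self.transitions = transitions      # {state: [(action, next_state), ...]}

    def goal_test(self, state):
        return state == self.goal_state

    def successors(self, state):
        return self.transitions.get(state, [])

def breadth_first_search(problem):
    # Input: a problem. Output: a solution as an action sequence, or None.
    frontier = deque([(problem.initial_state, [])])
    explored = {problem.initial_state}
    while frontier:
        state, actions = frontier.popleft()
        if problem.goal_test(state):
            return actions
        for action, nxt in problem.successors(state):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

# Tiny usage example: reach "D" from "A" over a small map of transitions.
toy = Problem("A", "D", {"A": [("go-B", "B"), ("go-C", "C")],
                         "B": [("go-D", "D")], "C": []})
print(breadth_first_search(toy))    # -> ['go-B', 'go-D']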

Q10.

For the two intelligent agents playing chess with a clock, the PEAS description of the task
environment for "Deep Blue", the chess-playing agent, is:

• Performance measure: winning the game (within the time allowed by the clock),
• Environment: chess pieces on a chess board, the adversary,
• Actuators: screen (to display or announce the chosen move),
• Sensors: camera, keyboard (to sense the board position and the opponent's moves).

The properties of this task environment are:

• Fully observable: the agent can always see the complete board position.
• Deterministic (or strategic): the next state is fully determined by the current state and the
moves of the two players, so either answer is acceptable.
• Sequential: each move affects all subsequent moves and the final outcome.
• Semi-dynamic: the board does not change while the agent deliberates, but the chess clock
keeps running.
• Discrete: there is a finite number of distinct board positions and legal moves.
• Multi-agent: the agent plays against an adversary (a competitive two-agent environment).
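
As an optional illustration (field names are hypothetical), the PEAS description and the environment
properties can be recorded in a small data structure:

from dataclasses import dataclass, field

@dataclass
class TaskEnvironment:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list
    properties: dict = field(default_factory=dict)

deep_blue = TaskEnvironment(
    performance_measure=["winning the game"],
    environment=["chess pieces on a chess board", "adversary"],
    actuators=["screen"],
    sensors=["camera", "keyboard"],
    properties={"observable": "fully", "deterministic": "deterministic/strategic",
                "episodic": "sequential", "static": "semi-dynamic",
                "discrete": "discrete", "agents": "multi-agent"},
)
print(deep_blue.properties["static"])   # -> semi-dynamic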

Q11.

Four basic kinds of agent programs embody the principles underlying almost all intelligent systems:

• Simple reflex agents;
• Model-based reflex agents;
• Goal-based agents; and
• Utility-based agents.

In addition, any of these can be turned into a learning agent.

Simple reflex agents:

The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the
current percept only, ignoring the rest of the percept history; the vacuum-cleaner agent is the
standard example.
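
A minimal sketch of such an agent, assuming the usual two-square vacuum world with locations "A"
and "B" (not spelled out in the original answer): the chosen action depends only on the current
percept.

def simple_reflex_vacuum_agent(percept):
    # percept = (location, status); no percept history is kept.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> Left
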
Model-based reflex agents:

Model-based reflex agents are used when the environment is only partially observable. Such an agent
cannot tell from the current percept alone what the world is like or what will happen when it takes an
action, so it maintains an internal state that is updated, using a model of the world, as new percepts
arrive.
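
A small sketch of the control loop (the helper names update_state and rules are hypothetical):
internal state compensates for partial observability, and condition-action rules are applied to the
updated belief rather than to the raw percept.

def make_model_based_agent(update_state, rules, initial_state):
    state = {"belief": initial_state, "last_action": None}

    def program(percept):
        # Update the belief about the world from the model, the last action
        # taken, and the new percept.
        state["belief"] = update_state(state["belief"], state["last_action"], percept)
        # Pick the first condition-action rule that matches the belief state.
        action = next(act for cond, act in rules if cond(state["belief"]))
        state["last_action"] = action
        return action

    return program

# Tiny usage example: a thermostat-like agent tracking the last temperature.
rules = [(lambda b: b["temp"] < 20, "heat-on"), (lambda b: True, "heat-off")]
agent = make_model_based_agent(lambda b, a, p: {"temp": p}, rules, {"temp": 21})
print(agent(18))   # -> heat-on
print(agent(22))   # -> heat-off
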

Goal-based agent:

Knowing something about the current state of the environment is not always enough to decide what
to do. Having explicitly specified goals makes the decision problem simpler: in such situations the
goal information helps the agent choose among the possible actions.

Utility-based agent:

Goals alone are not enough to generate high-quality behavior in most environments. Goals just
provide a crude binary distinction between “happy” and “unhappy” states. An agent’s utility function
is essentially an internalization of the performance measure. If the internal utility function and the
external performance measure are in agreement, then an agent that chooses actions to maximize its
utility will be rational according to the external performance measure.
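
A brief sketch of utility-based action selection (all names and numbers are illustrative): the agent
computes the expected utility of each action over its possible outcome states and picks the action that
maximizes it.

def expected_utility(action, outcomes, utility):
    # outcomes: {action: [(probability, resulting_state), ...]}
    return sum(p * utility(s) for p, s in outcomes[action])

def choose_action(actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

utility = {"safe": 0.7, "win": 1.0, "lose": 0.0}.get
outcomes = {
    "risky": [(0.5, "win"), (0.5, "lose")],   # expected utility 0.5
    "cautious": [(1.0, "safe")],              # expected utility 0.7
}
print(choose_action(["risky", "cautious"], outcomes, utility))   # -> cautious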

Learning agent:

A learning agent can improve its performance with experience. Conceptually it has four components:
a learning element (which makes improvements), a performance element (which selects external
actions), a critic (which tells the learning element how well the agent is doing with respect to a fixed
performance standard), and a problem generator (which suggests exploratory actions that lead to new
and informative experiences).