
PART-B TYPES OF AGENT PROGRAM

An agent program is a function that implements the agent mapping from percepts to actions. Four types of agent programs perform this mapping task:

1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents

1. Simple reflex agents

The simplest kind of agent is the simple reflex agent. It responds directly to percepts, i.e. it selects actions on the basis of the current percept, ignoring the rest of the percept history. Condition-action rules allow the agent to make the connection from percept to action:

Condition-action rule: if condition then action

Figure: Schematic diagram of a simple reflex agent

In the schematic diagram, rectangles denote the current internal state of the agent's decision process, and ovals represent the background information used in the process. The agent program, which is also very simple, is shown below:

function SIMPLE-REFLEX-AGENT(percept) returns an action
    static: rules, a set of condition-action rules
    state  ← INTERPRET-INPUT(percept)
    rule   ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    return action

Figure: A simple reflex agent

INTERPRET-INPUT - generates an abstracted description of the current state from the percept.

RULE-MATCH - returns the first rule in the set of rules that matches the given state description.

RULE-ACTION - the action part of the selected rule, which is executed for the given percept.

The agent in the figure works only if the correct decision can be made on the basis of the current percept alone, that is, only if the environment is fully observable.
Example: In a medical diagnosis system, if the patient has reddish brown spots, then start the treatment for measles.

2. Model-based reflex agents (agents that keep track of the world)
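The rule-based selection above can be sketched in Python. This is a minimal illustration of the simple reflex agent loop, assuming a toy rule table keyed on the interpreted state; the function and rule names are illustrative, not part of the AIMA pseudocode.

```python
def interpret_input(percept):
    """Abstract the raw percept into a state description (identity here)."""
    return percept

# Hypothetical condition-action rules for the medical diagnosis example.
RULES = {
    "reddish brown spots": "start measles treatment",
    "clear skin": "no treatment needed",
}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    # RULE-MATCH / RULE-ACTION collapsed into one dictionary lookup;
    # unmatched states fall through to a default action.
    return RULES.get(state, "refer to specialist")

print(simple_reflex_agent("reddish brown spots"))  # start measles treatment
```

Note that the agent consults only the current percept: there is no memory of earlier percepts, which is exactly why it fails in partially observable environments.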

The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. That is, the agent combines the current percept with its old internal state to generate an updated description of the current state. This updating requires two kinds of knowledge in the agent program:

o First => how the world evolves independently of the agent.
o Second => how the agent's own actions affect the world.

This knowledge, whether implemented in simple Boolean circuits or in complete scientific theories, is called a model of the world. An agent that uses such a model is called a model-based agent.

Figure: A model-based reflex agent

The agent

program is shown below. It keeps track of the current state of the world using an internal model, then chooses an action in the same way as the reflex agent.

function REFLEX-AGENT-WITH-STATE(percept) returns an action
    static: state, a description of the current world state
            rules, a set of condition-action rules
            action, the most recent action, initially none
    state  ← UPDATE-STATE(state, action, percept)
    rule   ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    return action

Figure: A model-based reflex agent

UPDATE-STATE - creates the new internal state description by combining the percept with the current state description.
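A minimal Python sketch of this idea follows. The percept keys, the "move_forward" action, and the obstacle rule are all hypothetical; the point is that UPDATE-STATE folds the previous state, the last action, and the new percept into a fresh state before a rule is matched.

```python
def update_state(state, action, percept):
    """Combine old state, last action, and current percept (UPDATE-STATE)."""
    new_state = dict(state)
    new_state.update(percept)           # what the percept reveals directly
    if action == "move_forward":        # how our own action changed the world
        new_state["position"] = new_state.get("position", 0) + 1
    return new_state

def model_based_agent():
    state, action = {}, None            # internal model persists across calls
    def step(percept):
        nonlocal state, action
        state = update_state(state, action, percept)
        # Condition-action rule: stop if an obstacle is modelled, else move.
        action = "stop" if state.get("obstacle") else "move_forward"
        return action
    return step

agent = model_based_agent()
print(agent({"obstacle": False}))  # move_forward
print(agent({"obstacle": True}))   # stop
```

Unlike the simple reflex agent, successive calls share the internal `state`, so the agent can act on facts it perceived earlier even if the current percept is silent about them.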

3. Goal-based agents

An agent that knows the description of the current state also needs some sort of goal information describing situations that are desirable. The action that matches the current state is selected depending on the goal state. A goal-based agent is also flexible enough to handle more than one destination: after one destination is reached and a new destination is specified, the goal-based agent comes up with new behaviour. Search and Planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.

Figure: A model-based, goal-based agent

Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified. The goal-based agent's behaviour can easily be changed to go to a different location.
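The search component of a goal-based agent can be sketched as follows. The road map and city names are made up for illustration; the key property is that changing the `goal` argument changes the behaviour without rewriting the agent.

```python
from collections import deque

# Hypothetical road map: each city maps to the cities reachable from it.
ROADS = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def plan_route(start, goal):
    """Breadth-first search for an action sequence from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in ROADS[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

print(plan_route("A", "D"))  # ['A', 'B', 'D']
```

Calling `plan_route("A", "C")` instead immediately yields different behaviour for the new destination, which is exactly the flexibility the text attributes to goal-based agents.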
4. Utility-based agents (Utility refers to the quality of being useful)

An agent generates a goal state with high-quality behaviour (utility); that is, if more than one sequence of actions reaches the goal state, then the sequence that is more reliable, safer, quicker, or cheaper than the others is selected. A utility function maps a state (or sequence of states) onto a real number, which describes the associated degree of happiness. The utility function is useful in two different cases:

First, when there are conflicting goals, only some of which can be achieved (e.g., speed and safety), the utility function specifies the appropriate trade-off.

Second, when the agent aims for several goals, none of which can be achieved with certainty, the likelihood of success can be weighed against the importance of the goals.


Figure: A model-based, utility-based agent
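The speed-versus-safety trade-off above can be made concrete with a small sketch. The routes, their numbers, and the weight values are invented for illustration; the utility function maps each alternative to a real number and the agent picks the maximiser.

```python
# Two hypothetical routes to the same goal: one fast but risky, one slow but safe.
routes = [
    {"name": "highway", "time": 30, "risk": 0.3},
    {"name": "backroad", "time": 45, "risk": 0.05},
]

def utility(route, w_time=1.0, w_risk=100.0):
    """Map a route to a real number; higher is better.

    The weights encode the trade-off between speed and safety."""
    return -(w_time * route["time"] + w_risk * route["risk"])

best = max(routes, key=utility)
print(best["name"])  # backroad
```

With these weights the safer route wins; raising `w_time` relative to `w_risk` would flip the choice, which is how the utility function "specifies the appropriate trade-off" between conflicting goals.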

PART-A

1. What is artificial intelligence?
Artificial intelligence is the art of creating machines that perform functions that require intelligence when performed by human beings. It leads to four important categories:
i) Systems that think like humans
ii) Systems that act like humans
iii) Systems that think rationally
iv) Systems that act rationally

2. Define rational agent.
A rational agent is one that does the right thing. For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

3. List the properties of task environments.
i) Fully observable vs. partially observable
ii) Deterministic vs. stochastic
iii) Episodic vs. sequential
iv) Static vs. dynamic
v) Discrete vs. continuous
vi) Single agent vs. multiagent

4. List the steps involved in simple problem solving.
i) Goal formulation
ii) Problem formulation
iii) Search
iv) Search algorithm
v) Execution phase

5. Differentiate uninformed and informed search strategies.

S.No | Uninformed (or) Blind Search                                        | Informed (or) Heuristic Search
1    | No additional information beyond the problem definition is provided | Uses problem-specific knowledge beyond the definition of the problem itself
2    | Less effective                                                      | More effective
3    | No information about the number of steps or path cost               | Additional information can be added as assumptions to solve the problem
4    | Eg. BFS, DFS, Bi-directional Search                                 | Eg. A* Search, Best First Search

6. Define Constraint Satisfaction Problem (CSP).
A CSP is a special kind of problem that satisfies some additional structural properties beyond the basic requirements for problems in general. In a CSP, the states are defined by the values of a set of variables, and the goal test specifies a set of constraints that the values must obey.

7. Define greedy best-first search.
Greedy best-first search expands the node that appears to be closest to the goal, i.e. the one likely to lead to a solution quickly. Nodes are evaluated using the heuristic function f(n) = h(n), where the heuristic h(n) is an estimate of the cost from n to the closest goal.
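Greedy best-first search from question 7 can be sketched briefly. The graph and the heuristic values below are made up; the essential point is that the frontier is ordered purely by f(n) = h(n), ignoring the cost paid so far.

```python
import heapq

# Hypothetical search graph and heuristic estimates of cost to the goal G.
GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
H = {"S": 5, "A": 1, "B": 3, "G": 0}

def greedy_best_first(start, goal):
    """Always expand the frontier node with the smallest h(n)."""
    frontier = [(H[start], start, [start])]   # (f(n)=h(n), node, path)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in GRAPH[node]:
            heapq.heappush(frontier, (H[nxt], nxt, path + [nxt]))
    return None

print(greedy_best_first("S", "G"))  # ['S', 'A', 'G']
```

Because only h(n) is consulted, A (h=1) is expanded before B (h=3); unlike A* search, no path cost g(n) is added, so the result is fast but not guaranteed optimal.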
