📘 AI Chapter 12: Top 20 Predicted Questions

🧠 With Full Answer Templates

Q1. Define an intelligent agent. Give examples.

Agent: An agent perceives its environment through sensors and acts upon it using actuators.

Intelligent Agent: An agent that acts rationally by taking the best possible action to maximize performance.

Rationality: Choosing the action expected to maximize goal achievement based on percept history and built-in knowledge.

Examples: a vacuum-cleaning robot, a chess-playing program, an automated taxi driver.

PEAS: Performance measure, Environment, Actuators, Sensors; the framework used to specify an agent's task environment.

[Draw a labeled Agent-Environment Loop Diagram]
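
A minimal sketch of this loop in Python (the environment's get_percept and execute methods are hypothetical names invented for this illustration):

# Agent-environment loop: sense with sensors, decide, act with actuators
def run_agent(agent_program, environment, steps=10):
    for _ in range(steps):
        percept = environment.get_percept()   # sensors observe the environment
        action = agent_program(percept)       # agent function maps percept to action
        environment.execute(action)           # actuators change the environment

# Example agent program: a simple reflex vacuum agent
def vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"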

Q2. Compare Simple Reflex Agent vs Learning Agent.

Agents can be categorized based on their capability to learn and remember.

Aspect     | Simple Reflex Agent | Learning Agent
Memory     | No memory of past   | Stores past experiences
Learning   | Cannot learn        | Improves via learning element
Goal-based | No goals            | Goal-oriented behavior
Example    | Vacuum cleaner      | Chess-playing agent
[Include Learning Agent diagram: Learning Element, Performance Element, Critic, Problem Generator]

Q3. Classify task environments using 6 properties.

Classify environments along six dimensions (mnemonic FEDSSD): Fully/Partially observable, Episodic/Sequential, Deterministic/Stochastic, Static/Dynamic, Single-agent/Multi-agent, Discrete/Continuous.

Agent          | Properties
Chess          | Fully observable, Sequential, Deterministic, Static, Discrete, Multi-agent
Taxi           | Partially observable, Sequential, Stochastic, Dynamic, Continuous, Multi-agent
Vacuum Cleaner | Partially observable, Episodic, Deterministic, Static, Discrete, Single-agent

Q4. Define and explain BFS and DFS with one example.

BFS (Breadth-First Search): Explores level by level using a queue. Completeness: Yes. Optimal: Yes (unit cost).

DFS (Depth-First Search): Explores depth-wise using a stack or recursion. Completeness: No. Optimal: No.

[Draw a tree showing BFS vs DFS node expansion]

Example: Finding node 'G' in a tree rooted at 'A'.

Criteria       | BFS                    | DFS
Data Structure | Queue (FIFO)           | Stack (LIFO) or recursion
Space          | O(b^d)                 | O(bm)
Time           | O(b^d)                 | O(b^m)
Optimal        | Yes (if step cost = 1) | No

(b = branching factor, d = depth of the shallowest goal, m = maximum depth of the tree)
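
A brief Python sketch of both searches on a small dictionary-based tree (the tree, rooted at 'A' with goal 'G', is invented for this example):

from collections import deque

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': [], 'E': [], 'F': [], 'G': []}

def bfs(start, goal):
    queue = deque([start])                    # FIFO queue: level-by-level expansion
    while queue:
        node = queue.popleft()
        if node == goal:
            return node
        queue.extend(tree[node])

def dfs(start, goal):
    stack = [start]                           # LIFO stack: depth-first expansion
    while stack:
        node = stack.pop()
        if node == goal:
            return node
        stack.extend(reversed(tree[node]))    # reversed so the left child is expanded first

print(bfs('A', 'G'), dfs('A', 'G'))           # both find G, but expand nodes in different orders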

Q5. Solve a given graph using UCS and A*.

Uniform Cost Search (UCS): Expands nodes with lowest path cost g(n).

A* Search: Uses f(n) = g(n) + h(n), where h(n) is a heuristic estimate to goal.

Example Graph: A connected graph with nodes A to G, each with edge costs and heuristic values.

Node | g(n) | h(n) | f(n) = g(n) + h(n)
A    | 0    | 10   | 10
B    | 2    | 8    | 10
C    | 4    | 5    | 9

• UCS expands the lowest g(n). A* expands lowest f(n).

[Show step-by-step expansion and final path traced]
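
A compact A* sketch in Python (the graph, edge costs, and heuristic values below are made up for illustration, with h admissible; setting h(n) = 0 for every node turns the same code into UCS):

import heapq

graph = {'A': [('B', 2), ('C', 4)], 'B': [('D', 4)], 'C': [('D', 1)], 'D': []}
h = {'A': 5, 'B': 4, 'C': 1, 'D': 0}          # heuristic estimates of cost to goal D

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]        # entries are (f, g, node, path)
    while frontier:
        f, g, node, path = heapq.heappop(frontier)    # always expand the lowest f(n) = g(n) + h(n)
        if node == goal:
            return path, g
        for neighbor, cost in graph[node]:
            new_g = g + cost
            heapq.heappush(frontier, (new_g + h[neighbor], new_g, neighbor, path + [neighbor]))

print(a_star('A', 'D'))                       # -> (['A', 'C', 'D'], 5)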

Q6. Define admissible and consistent heuristics.

Admissible Heuristic: A heuristic h(n) is admissible if it never overestimates the cost to reach the goal from node n. Formally:

h(n) ≤ h*(n) for all n, where h*(n) is the true cost to reach the goal.

Consistent (Monotonic) Heuristic: A heuristic h(n) is consistent if:

h(n) ≤ c(n, a, n') + h(n') for every node n and successor n', where c(n, a, n') is the cost of action a.

Example (8-Puzzle): h1(n) = number of misplaced tiles and h2(n) = total Manhattan distance of the tiles from their goal positions are both admissible; h2 is also consistent and dominates h1.

Impact on A*: With an admissible heuristic, A* tree search is optimal. With a consistent heuristic, A* graph search is optimal and f(n) never decreases along any path.
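
A small Python sketch of the Manhattan-distance heuristic for the 8-puzzle (the tuple board encoding and goal layout are assumptions made for this example, with 0 standing for the blank):

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)            # assumed goal layout, read row by row

def manhattan(state):
    """Sum of Manhattan distances of the tiles from their goal squares (admissible and consistent)."""
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:                         # skip the blank
            continue
        goal_index = GOAL.index(tile)
        total += abs(index // 3 - goal_index // 3) + abs(index % 3 - goal_index % 3)
    return total

print(manhattan((1, 2, 3, 4, 5, 6, 7, 0, 8)))  # -> 1 (tile 8 is one move from its goal square)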

Q7. Explain CSP with an example.

Constraint Satisfaction Problem (CSP): A CSP is defined by three components:

  1. Variables X = {X1, ..., Xn}
  2. Domains D = {D1, ..., Dn}, one set of allowed values per variable
  3. Constraints C, which restrict the combinations of values the variables may take together

Example – Map Coloring: Color the regions of Australia (WA, NT, SA, Q, NSW, V, T) with {red, green, blue} so that no two adjacent regions share a color.

[Draw a map of Australia showing connected regions]

Alternate Example – SEND + MORE = MONEY: Assign a distinct digit (0–9) to each letter, with S ≠ 0 and M ≠ 0, so that the column-wise addition holds.

[Draw constraint graph with nodes and binary constraints]
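
A minimal backtracking sketch for the map-coloring CSP in Python (the adjacency list and the fixed variable ordering are simplifications made for this illustration):

neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
             'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
             'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': []}
colors = ['red', 'green', 'blue']

def backtrack(assignment):
    if len(assignment) == len(neighbors):     # every region colored: solution found
        return assignment
    region = next(r for r in neighbors if r not in assignment)
    for color in colors:
        # constraint: adjacent regions must not share a color
        if all(assignment.get(n) != color for n in neighbors[region]):
            result = backtrack({**assignment, region: color})
            if result:
                return result
    return None                               # no color works: backtrack

print(backtrack({}))                          # one valid coloring of all seven regions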

Q8. What is arc consistency? Explain AC-3.

Arc Consistency: An arc X → Y is consistent if for every value in the domain of X, there is some allowed value in Y satisfying the constraint.

AC-3 Algorithm: A popular algorithm to enforce arc consistency on all arcs in a CSP.

Steps:

  1. Initialize queue with all arcs
  2. While queue not empty:
    • Remove (Xi, Xj)
    • If the domain of Xi is revised, add every arc (Xk, Xi), where Xk is a neighbor of Xi other than Xj, back to the queue

Pseudocode:

function AC-3(csp):
    queue ← all arcs in csp
    while queue is not empty do:
        (Xi, Xj) ← REMOVE-FIRST(queue)
        if REVISE(csp, Xi, Xj) then
            if domain of Xi is empty then return false
            for each Xk in neighbors[Xi] - {Xj} do
                add (Xk, Xi) to queue
    return true
    

Example: X ∈ {1,2,3}, Y ∈ {2,3}, with constraint X < Y → Remove 3 from X
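
A runnable Python sketch of AC-3 on this example (the domains and the X < Y constraint are hard-coded for brevity; a full solver would take them as parameters):

from collections import deque

domains = {'X': {1, 2, 3}, 'Y': {2, 3}}
constraints = {('X', 'Y'): lambda x, y: x < y,
               ('Y', 'X'): lambda y, x: x < y}
neighbors = {'X': ['Y'], 'Y': ['X']}

def revise(xi, xj):
    """Remove values of Xi that have no supporting value in the domain of Xj."""
    removed = False
    for value in set(domains[xi]):
        if not any(constraints[(xi, xj)](value, other) for other in domains[xj]):
            domains[xi].discard(value)
            removed = True
    return removed

def ac3():
    queue = deque(constraints)                # start with all arcs
    while queue:
        xi, xj = queue.popleft()
        if revise(xi, xj):
            if not domains[xi]:
                return False                  # an empty domain means the CSP is inconsistent
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))    # re-check arcs pointing at Xi
    return True

print(ac3(), domains)                         # -> True {'X': {1, 2}, 'Y': {2, 3}}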

Q9. Translate English statements to First-Order Logic (FOL).

Step-by-Step Method:

  1. Identify Quantifiers (∀, ∃)
  2. Find Predicates (e.g., Human(x), Mortal(x))
  3. Construct implication

Example 1: “All humans are mortal”

FOL: ∀x (Human(x) → Mortal(x))

Example 2: “Some birds can fly”

FOL: ∃x (Bird(x) ∧ CanFly(x))

Example 3: “Every student loves some subject”

FOL: ∀x (Student(x) → ∃y (Subject(y) ∧ Loves(x, y)))

Q10. What is entailment? Compare model checking vs theorem proving.

Entailment (⊨): A sentence α entails β if β is true in all models where α is true.

α ⊨ β ⇔ In every model where α is true, β is also true.

Approaches to Test Entailment:

Method          | Description                                                    | Example
Model Checking  | Enumerate all models and check that β holds wherever α is true | Small logical domains
Theorem Proving | Apply inference rules (e.g., Modus Ponens) to derive β from α  | Prolog, logic-based AI

Example: If α = {Raining, Raining → WetGround} and β = WetGround, then α ⊨ β, because WetGround is true in every model in which both sentences of α are true.
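
A tiny model-checking sketch in Python for this example (it enumerates every truth assignment to the two symbols Raining and WetGround):

from itertools import product

def alpha(raining, wet):
    return raining and ((not raining) or wet)     # Raining ∧ (Raining → WetGround)

def beta(raining, wet):
    return wet                                    # WetGround

# α ⊨ β iff β is true in every model (truth assignment) in which α is true
entails = all(beta(r, w) for r, w in product([True, False], repeat=2) if alpha(r, w))
print(entails)                                    # -> True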

Q11. Explain Minimax algorithm with a game tree.

Minimax Algorithm: A decision rule for minimizing the possible loss in a worst-case scenario. Used in 2-player, turn-based games like Tic-Tac-Toe, Chess.

Game Tree:

            MAX
         /   |   \
       MIN   MIN   MIN
        3     12     8

(each MIN value shown is the minimum of that node's own leaves, which are omitted here)

Propagation: each MIN node passes up the minimum of its children; MAX then chooses max(3, 12, 8) = 12.
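
A short recursive minimax sketch in Python (the nested-list game tree is invented so that the three MIN nodes back up the values 3, 12 and 8 shown above):

def minimax(node, maximizing):
    if isinstance(node, (int, float)):        # leaf: return its utility value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Root is MAX; each inner list is a MIN node
tree = [[3, 5, 9], [12, 15], [8, 10, 20]]
print(minimax(tree, True))                    # -> 12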

Q12. What is alpha-beta pruning? Show how it works.

Alpha-Beta Pruning: Optimization of the Minimax algorithm that avoids exploring parts of the tree that won't affect the final decision.

Condition for Pruning: If β ≤ α, prune the branch

        MAX
       /    \
     MIN    MIN
    / | \   / | \
   3  5  6  1  (7) ← pruned
      

Here the left MIN node evaluates to 3, so α = 3 at the MAX root; at the right MIN node the first leaf gives β = 1 ≤ α = 3, so its remaining leaves (e.g., 7) are pruned. Pruning improves efficiency without changing the final result.
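
A standalone alpha-beta sketch in Python, matching the tree drawn above (the nested-list representation is an assumption of this example):

def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    if isinstance(node, (int, float)):        # leaf node: return its utility value
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if beta <= alpha:                 # β ≤ α: prune the remaining children
                break
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:                     # β ≤ α: prune the remaining children
            break
    return value

# At the right MIN node, leaf 1 gives β = 1 ≤ α = 3, so leaf 7 is never evaluated
print(alphabeta([[3, 5, 6], [1, 7]], True))   # -> 3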

Q13. Define a planning problem with STRIPS.

Planning: Finding a sequence of actions from initial state to goal state.

STRIPS (Stanford Research Institute Problem Solver): Framework for automated planning.

Elements:

  1. Initial state: a conjunction of ground literals
  2. Goal: a conjunction of literals to be achieved
  3. Actions, each described by a Precondition list, an Add list, and a Delete list

Example – Spare Tire Problem:

Init: At(Flat, Axle) ∧ At(Spare, Trunk)
Goal: At(Spare, Axle)
Actions: Remove(Spare, Trunk), Remove(Flat, Axle), PutOn(Spare, Axle), LeaveOvernight

Sample action PutOn(Spare, Axle): Precondition: At(Spare, Ground) ∧ ¬At(Flat, Axle); Add: At(Spare, Axle); Delete: At(Spare, Ground).

Q14. What is PDDL? Write PDDL for a pickup action.

PDDL (Planning Domain Definition Language): A standard language for expressing planning domains and problems.

Structure:

  1. Domain file: predicates and action schemas (:action blocks with :parameters, :precondition, :effect)
  2. Problem file: objects, initial state (:init), and goal specification (:goal)

Example – Pickup Action:

(:action pickup
  :parameters (?obj ?loc)
  :precondition (and (at ?obj ?loc) (handempty))
  :effect (and (holding ?obj) (not (at ?obj ?loc)) (not (handempty)))
)
    

This defines that the agent can pick up an object if it's at a location and the agent’s hand is empty.

Q15. Explain components of a learning agent.

Learning Agent: An agent that can learn from experience and improve its performance.

Components:

  1. Learning element: improves the agent using feedback from the critic
  2. Performance element: selects external actions
  3. Critic: compares the agent's behavior with a fixed performance standard and provides feedback
  4. Problem generator: suggests exploratory actions that lead to new, informative experiences

[Diagram: Performance Element → Environment → Critic → Learning Element → Performance Element (cycle)]

Example: A chess-playing robot: the performance element chooses moves, the critic evaluates game outcomes, the learning element updates the board-evaluation function, and the problem generator proposes experimental openings to gain new experience.

Q16. Differentiate Supervised, Unsupervised, and Reinforcement Learning.

Supervised Learning: Model learns from labeled data.

Unsupervised Learning: Model identifies patterns in unlabeled data.

Reinforcement Learning (RL): Agent learns through trial-and-error with rewards and punishments.

Type          | Data                         | Goal                    | Example
Supervised    | Labeled                      | Predict output          | Email spam detection
Unsupervised  | Unlabeled                    | Find structure/patterns | Customer segmentation
Reinforcement | Interaction with environment | Maximize reward         | Game-playing agent (e.g., AlphaGo)

Key RL Terms: Reward (feedback signal), Policy (strategy to choose actions)
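
As a concrete illustration of reward and policy, a minimal Q-learning update in Python (the states, actions, and learning parameters are arbitrary placeholders, not from the question):

import random

ACTIONS = ['left', 'right']
Q = {(s, a): 0.0 for s in ['s0', 's1'] for a in ACTIONS}   # expected return per (state, action)
alpha, gamma = 0.5, 0.9                                    # learning rate and discount factor

def update(state, action, reward, next_state):
    """One Q-learning step: Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def policy(state, epsilon=0.1):
    """Epsilon-greedy policy: usually exploit the best known action, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

update('s0', 'right', 1.0, 's1')              # reward of 1 for moving right from s0
print(Q[('s0', 'right')], policy('s0'))       # -> 0.5 right (policy may occasionally explore)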

Q17. What are the main tasks in NLP?

Natural Language Processing (NLP) enables machines to understand and generate human language.

Main Tasks:

  1. Tokenization and morphological analysis
  2. Part-of-speech tagging
  3. Syntactic parsing
  4. Named entity recognition
  5. Machine translation
  6. Sentiment analysis
  7. Question answering and text summarization

Applications: Chatbots, Machine Translation, Sentiment Analysis

Q18. Describe an expert system. What are its components?

Expert System: A computer system that emulates decision-making ability of a human expert.

Components:

  1. Knowledge base: domain facts and IF-THEN rules acquired from human experts
  2. Inference engine: applies the rules to known facts to derive new conclusions
  3. User interface: lets the user pose queries and view the system's conclusions and explanations

[Diagram: User ↔ UI ↔ Inference Engine ↔ Knowledge Base]

Example: MYCIN (medical diagnosis expert system)

Q19. Differentiate between forward and backward chaining.

Forward Chaining: Starts from known facts and applies inference rules to reach goal.

Backward Chaining: Starts from goal and works backwards to determine supporting facts.

Aspect    | Forward Chaining         | Backward Chaining
Direction | Data → Goal              | Goal → Data
Use Case  | Data-driven systems      | Goal-driven reasoning
Example   | Rule-based expert system | Prolog program

Example: Rules: A → B, B → C. Known fact: A.
Forward chaining: from A derive B, then derive C.
Backward chaining: to prove C, first prove B, which requires proving A; A is known, so C holds.
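
A tiny forward-chaining sketch in Python for this rule base (rules are stored as (premise, conclusion) pairs; this is an illustration, not a full inference engine):

rules = [('A', 'B'), ('B', 'C')]              # A → B, B → C
facts = {'A'}                                 # known facts

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premise is known until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(set(facts), rules))       # -> {'A', 'B', 'C'} (set order may vary)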

Q20. Write short notes on:

(a) STRIPS

STRIPS: A formal language to define planning problems using states and actions.
Contains: Preconditions, Add, Delete lists.
Example: Move(robot, A, B)

(b) Turing Test

Turing Test: Proposed by Alan Turing to evaluate machine intelligence. A machine passes if, in a text-based conversation, a human interrogator cannot reliably tell it apart from a human.

(c) Reflex Agent

Reflex Agent: Acts based only on current percept.
Example: Thermostat turns on/off heater based on temperature.