2.1 What is an Agent?
An agent is anything that perceives its environment through sensors and acts upon that environment through actuators.
Formal Definition:
An agent is an autonomous entity that uses sensors to observe its surroundings (percepts) and actuators to act upon that environment.
2.2 Agent-Environment Loop
Here’s how an agent works:
```
[ Environment ]
      ↓ (percept)
+---------------------+
|       Sensor        |
+---------------------+
      ↓
+---------------------+
|     Agent Logic     |
|  (decision-making)  |
+---------------------+
      ↓
+---------------------+
|      Actuator       |
+---------------------+
      ↓ (action)
[ Environment changes ]
```
The agent continuously cycles through three steps (a minimal code sketch follows the list):
- Observes the environment (percepts)
- Thinks (based on built-in rules or learning)
- Acts to change the environment
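Here is a minimal Python sketch of that perceive-think-act loop. The `Environment` class and `agent_logic` function are hypothetical placeholders for illustration, not part of any standard library:

```python
# Toy world: a single square that may be dirty (hypothetical example).
class Environment:
    def __init__(self):
        self.dirty = True

    def percept(self):
        return "dirty" if self.dirty else "clean"

    def apply(self, action):
        if action == "suck":
            self.dirty = False


def agent_logic(percept):
    # Decision-making: map the current percept to an action.
    return "suck" if percept == "dirty" else "no-op"


env = Environment()
for _ in range(3):                  # run a few perceive-think-act cycles
    p = env.percept()               # 1. observe (sensor)
    action = agent_logic(p)         # 2. think (agent logic)
    env.apply(action)               # 3. act (actuator)
    print(p, "->", action)
```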
2.3 What is a Rational Agent?
A rational agent is one that does the right thing: for each percept sequence, it selects the action expected to maximize its performance measure, given the percept history and its built-in knowledge.
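One way to make this concrete is as an argmax over expected performance. In the sketch below the action scores are invented purely for illustration; they are not derived from any real model:

```python
# Hedged sketch: a rational agent picks the action with the highest
# expected performance, given what it currently knows.
expected_performance = {
    "suck":  0.9,   # expected value of cleaning the current square
    "move":  0.4,   # expected value of moving on
    "no-op": 0.1,   # expected value of doing nothing
}

best_action = max(expected_performance, key=expected_performance.get)
print(best_action)   # -> "suck"
```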
2.4 Performance Measures
| Agent | Performance Measure |
|---|---|
| Chess-playing AI | Win the game |
| Delivery Robot | Deliver to correct location efficiently |
| Email Classifier | Correct spam detection rate |
| Smart Vacuum | Clean the floor quickly and thoroughly |
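In code, a performance measure is just a scoring function over what happened. The sketch below uses an invented scoring rule for the smart vacuum (one point per cleaned square, a small penalty per move to reward speed); it is an example, not a standard definition:

```python
# Hypothetical performance measure for a smart vacuum:
# +1 per square cleaned, -0.1 per move made.
def vacuum_performance(squares_cleaned, moves_made):
    return squares_cleaned * 1.0 - moves_made * 0.1

print(vacuum_performance(squares_cleaned=8, moves_made=20))  # -> 6.0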
2.5 The PEAS Framework
PEAS = Performance measure, Environment, Actuators, Sensors. This framework helps you design or analyze an AI agent; a small code sketch follows the example table below.
Example: Autonomous Taxi
| Component | Description |
|---|---|
| Performance | Safe driving, fast passenger trips, obey traffic laws |
| Environment | Roads, traffic, passengers, pedestrians |
| Actuators | Steering, accelerator, brake, display, horn |
| Sensors | GPS, camera, radar, LiDAR, speedometer |
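A PEAS description can be written down as plain structured data. The dataclass below is just one possible way to record it (the field names are our own choice), filled in with the taxi example from the table:

```python
from dataclasses import dataclass

# A PEAS specification as a simple record (illustrative, not a standard API).
@dataclass
class PEAS:
    performance: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

autonomous_taxi = PEAS(
    performance=["safe driving", "fast passenger trips", "obey traffic laws"],
    environment=["roads", "traffic", "passengers", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "display", "horn"],
    sensors=["GPS", "camera", "radar", "LiDAR", "speedometer"],
)
print(autonomous_taxi.sensors)
```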
2.6 Agent Types (with Diagrams)
There are five standard types of agents, in order of increasing complexity (a code sketch of the first two follows this list):
- Simple Reflex Agent: Acts only on current percept using IF-THEN rules.
- Model-Based Reflex Agent: Uses internal state (memory) to keep track of unseen parts of the world.
- Goal-Based Agent: Makes decisions to achieve a goal.
- Utility-Based Agent: Chooses the most desirable outcome when multiple goals exist.
- Learning Agent: Improves its performance over time through experience.
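To make the first two types concrete, here is a hedged sketch contrasting a simple reflex agent (IF-THEN rules on the current percept only) with a model-based reflex agent that also keeps an internal state. Both agents and their rules are toy examples invented for this section:

```python
# Simple reflex agent: decides from the current percept alone.
def simple_reflex_agent(percept):
    if percept == "dirty":
        return "suck"
    return "move"

# Model-based reflex agent: keeps internal state (which squares it believes
# are already clean) and uses it alongside the current percept.
class ModelBasedReflexAgent:
    def __init__(self):
        self.cleaned = set()          # internal model of the world

    def act(self, location, percept):
        if percept == "dirty":
            self.cleaned.add(location)   # after sucking, this square is clean
            return "suck"
        if location in self.cleaned:
            return "move"                # nothing left to do here
        return "inspect"                 # unknown square: look more closely

agent = ModelBasedReflexAgent()
print(simple_reflex_agent("dirty"))      # -> "suck"
print(agent.act("A", "dirty"))           # -> "suck"
print(agent.act("A", "clean"))           # -> "move"
```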
2.7 Summary Table: Agent Types
| Type | Memory | Goals | Learning | Example |
|---|---|---|---|---|
| Simple Reflex | ❌ | ❌ | ❌ | Basic vacuum cleaner |
| Model-Based Reflex | ✅ | ❌ | ❌ | Wumpus agent |
| Goal-Based | ✅ | ✅ | ❌ | Maze-solving robot |
| Utility-Based | ✅ | ✅ | ❌ | Route optimizer |
| Learning Agent | ✅ | ✅ | ✅ | Chess-playing AI, spam filter |
2.8 Common Exam Questions
- Define an intelligent agent with two examples.
- What are the differences between reflex and goal-based agents?
- Explain the PEAS framework using an interactive tutor as an example.
- Draw and explain the architecture of a learning agent.
- What makes a rational agent different from a rule-based agent?