diff --git a/code-studio/features/ask.md b/code-studio/features/ask.md
index d33c361..ebf7bf9 100644
--- a/code-studio/features/ask.md
+++ b/code-studio/features/ask.md
@@ -1,56 +1,78 @@
---
title: Syncfusion Code Studio Ask Mode
-description: Ask Mode in Code Studio IDE provides quick, AI‑powered explanations, examples, and best practices without altering your project files.
+description: Ask Mode in Code Studio provides AI-powered explanations, code examples, and best practices in a non-invasive conversational interface that does not modify workspace files.
platform: syncfusion-code-studio
keywords: code, IDE, AI, developer-tools, syncfusion, code-assistance, productivity, UI-generation, bug-fixing, documentation
---
-# Ask
-## Description
-Ask Mode in Code Studio IDE is a simple way to get quick help while coding. Instead of digging through long documentation or searching online, you can just ask questions directly in the IDE. Code Studio Chat will give you explanations, examples, or best practices based on what you ask—without touching your project files.
+# Ask
-## Purpose
-The purpose of Ask Mode is to make learning and problem solving easier.It helps new users understand coding concepts, see ready-to-use code snippets, and get clear guidance on how to apply patterns or practices—all in a fast, conversational way.
+## Feature Overview
-## How to use ask mode in Code Studio
+Ask Mode is a conversational interface within Code Studio that provides immediate technical guidance without modifying your workspace. It delivers explanations, code examples, and best practice recommendations based on your queries, operating independently of your project files. This mode is designed for knowledge acquisition and exploration rather than direct code manipulation.
-### Step 1: Select Ask Mode
-- Launch Code Studio IDE.
-- Open the chat view of Code Studio, select **Ask** from the agents dropdown.
+## Use Cases
-
+**Use Ask Mode when you need:**
-### Step 2: Ask Your Question
-- Type your question in plain language, no special format needed.
-- You can ask about coding concepts, request code snippets, or seek best practice guidance.
+- **Quick Technical Clarifications** - Resolve coding questions without external documentation searches
+- **Code Snippet Generation** - Obtain ready-to-use code examples for specific programming tasks
+- **Concept Explanations** - Understand design patterns, framework architectures, or language features through concise technical summaries
+- **Non-Invasive Exploration** - Test ideas and validate logic without workspace file modifications or context dependencies
-**Example:**
+## Prerequisites
-
+**Code Studio Installation** - Download and configure the IDE: [Install and Configuration](/code-studio/getting-started/install-and-configuration)
-**Response:**
+## How to Use Ask Mode in Code Studio
-
+### Step 1: Activate Ask Mode
-### Step 3: Get Instant Guidance
-- Code Studio Chat will respond with explanations, examples, or summaries.
-- The answers are conversational and easy to follow, like having a mentor guide you.
-- You don’t need to share your project files—Ask Mode works only with the context you provide.
+- Launch Code Studio IDE
+- Open the Code Studio chat panel
+- Select **Ask** from the agent dropdown menu
-### Step 4: Apply What You Learn
-- Copy code snippets directly into your project.
-- Use explanations to understand concepts before implementing them.
-- Refer back to Ask Mode whenever you need quick clarification.
+
-## Why use ask mode?
-Ask Mode is designed for learning and exploration. It’s perfect when you want:
-- Quick answers without digging through documentation.
-- Ready-to-use code snippets for common tasks.
-- Clear explanations of design patterns and best practices.
+### Step 2: Submit Your Query
+
+Type your query using natural language. For optimal results:
+- State the programming language or framework explicitly
+- Include relevant error messages or stack traces when troubleshooting
+- Specify the desired output format
+
+**Example Query:**
+
+
+
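+One illustrative query (the wording is hypothetical, shown only to demonstrate the format):
+
+```
+How do I debounce a search input handler in JavaScript so the API is
+only called after the user stops typing for 300 ms?
+```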
+### Step 3: Review the Response
+
+Code Studio processes your query and returns:
+- Technical explanations with relevant context
+- Executable code snippets with syntax highlighting
+- Best practice recommendations specific to your query
+- References to related concepts when applicable
+
+**Example Response:**
+
+
+
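+For a query about debouncing a search input (a hypothetical example), a response of roughly this shape could be expected:
+
+```
+Debouncing delays a function call until input has paused. A minimal
+implementation:
+
+function debounce(fn, delay) {
+  let timer;
+  return (...args) => {
+    clearTimeout(timer);            // cancel the pending call
+    timer = setTimeout(() => fn(...args), delay); // reschedule
+  };
+}
+
+Attach the debounced handler to your input's event listener so the
+API call fires only after the user stops typing.
+```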
+### Step 4: Apply or Iterate
+
+- **Copy snippets** directly into your editor for immediate use
+- **Refine queries** with follow-up questions to clarify edge cases
+- **Request alternatives** by asking for different approaches or implementations
## Best Practices
-- Write clear and direct questions so Ask Mode understands your request.
-- Add context like code snippets, frameworks, or error messages for accurate answers.
-- Keep questions focused by asking one at a time.
-- Review Ask Mode’s response carefully before applying it to your project.
-- Refine your query or ask follow-up questions if the first answer isn’t enough.
+
+- **Be Specific** - Include framework versions, error codes, and technical constraints in your query
+- **Provide Context** - Share minimal code snippets that demonstrate the problem or requirement
+- **Single-Purpose Queries** - Ask one focused question per prompt for clearer responses
+- **Verify Outputs** - Review generated code for security, performance, and compatibility before integration
+- **Iterate When Needed** - Refine queries with additional details if the initial response is insufficient
+- **Use Proper Terminology** - Technical accuracy in your question improves response quality
+
+## Related Features
+
+- [Edit Mode](/code-studio/features/edit) - Context-aware file editing with workspace integration
+- [Agent Mode](/code-studio/features/agent) - Multi-step task automation with tool execution
diff --git a/code-studio/reference/configure-properties/toolssupport.md b/code-studio/reference/configure-properties/toolssupport.md
index c7d59a8..cb8b855 100644
--- a/code-studio/reference/configure-properties/toolssupport.md
+++ b/code-studio/reference/configure-properties/toolssupport.md
@@ -1,61 +1,66 @@
---
title: Introduction to Tools in Syncfusion Code Studio
-description: Learn how to use the built-in tools in Syncfusion Code Studio to streamline your development workflow and automate tasks efficiently.
+description: Learn how to use the built-in and MCP tools in Syncfusion Code Studio to streamline your development workflow and automate tasks efficiently.
platform: syncfusion-code-studio
-keywords: tools, syncfusion, code-studio, development, automation, workflow, built-in-tools
+keywords: tools, syncfusion, code-studio, development, automation, workflow, built-in-tools, MCP
---
# Tools Support
-## Overview
+## Purpose
-The Tools Support feature in Syncfusion Code Studio empowers developers to perform specific actions within the development environment, such as creating folders, reading files, searching within files, and interacting with browsers. This guide provides a step-by-step approach to use the built-in tools, enabling you to streamline your development workflow and automate tasks efficiently with simple prompts.
+The Tools Support feature in Syncfusion Code Studio empowers developers to perform specific actions within the development environment, such as creating folders, reading files, searching within files, and interacting with browsers. This guide provides step-by-step guidance on using the built-in and MCP tools, enabling you to streamline your development workflow and automate tasks efficiently with simple prompts.
-## Purpose
+## When to Use
The tools are designed to automate and simplify common development tasks, allowing you to focus on writing code. Key purposes include:
### 1. File Management
-- Create new files or edit existing ones.
-- Perform bulk search-and-replace operations.
-- Organize project structures efficiently.
+- Create new files or edit existing ones
+- Perform bulk search-and-replace operations
+- Organize project structures efficiently
### 2. Terminal Integration
-- Run CLI commands like npm install or yarn start.
-- Install dependencies or launch development servers.
-- Automate build and deployment processes.
+- Run CLI commands like `npm install` or `yarn start`
+- Install dependencies or launch development servers
+- Automate build and deployment processes
### 3. Code Insights
-- Identify and fix bugs with AI-driven suggestions.
-- Refactor code for better performance or readability.
-- Generate inline documentation automatically.
+- Identify and fix bugs with AI-driven suggestions
+- Refactor code for better performance or readability
+- Generate inline documentation automatically
### 4. Web and Browser Tools
-- Perform web searches to fetch relevant resources.
-- Automate browser tasks like testing or scraping.
+- Perform web searches to fetch relevant resources
+- Automate browser tasks like testing or scraping
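+
+For example, a single natural-language prompt (illustrative only) can drive several of these tool categories at once:
+
+```
+Create a folder named `reports`, move all *.csv files from the project
+root into it, and run `npm test` to confirm nothing broke.
+```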
+
+## Prerequisites
+
+1. **Code Studio Installation** - Download and configure the IDE: [Install and Configuration](/code-studio/getting-started/install-and-configuration)
+2. **For MCP Tools** - MCP servers must be installed and configured before they can be used. See [MCP Marketplace](/code-studio/reference/configure-properties/mcp/marketplace) for installation instructions.
## Types of Tools
-Syncfusion Code Studio provides a suite of tools to streamline your workflow.You can use two types of tools in chat
+Syncfusion Code Studio provides a suite of tools to streamline your workflow. You can use two types of tools in chat:
### 1. Built-In Tools
-- Built-in tools are automatically available in chat.
-- They cover common development tasks and are optimized for working within your workspace.
-- No installation or configuration is required — they are ready to use as soon as you start.
+- Built-in tools are automatically available in chat.
+- They cover common development tasks and are optimized for working within your workspace.
+- No installation or configuration is required — they are ready to use as soon as you start.
### 2. MCP Tools
-- Model Context Protocol (MCP) is an open standard that enables AI models to use external tools and services through a unified interface.
-- MCP servers provide tools that you can add to Syncfusion Code Studio to extend chat with additional capabilities.
-- To use MCP tools, you must install and configure MCP servers first.
-- MCP servers can run locally on your machine or be hosted remotely.
+- Model Context Protocol (MCP) is an open standard that enables AI models to use external tools and services through a unified interface.
+- MCP servers provide tools that you can add to Syncfusion Code Studio to extend chat with additional capabilities.
+- To use MCP tools, you must install and configure MCP servers first.
+- MCP servers can run locally on your machine or be hosted remotely.
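+
+As a sketch only (the exact configuration file location and schema depend on your Code Studio setup, and the server name and path below are hypothetical), an MCP server entry typically specifies a command for launching a local server or a URL for a remote one:
+
+```json
+{
+  "mcpServers": {
+    "filesystem": {
+      "command": "npx",
+      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./workspace"]
+    }
+  }
+}
+```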
### Toolset Overview
-Below is a list of some tools and their descriptions for reference
+Below is a list of available tools and their descriptions for reference:
-## Tools Approval
+### Tools Approval
When using agents, the agent automatically determines which tools to use from the enabled set based on your prompt and the context of your request, then autonomously invokes the relevant tools needed to accomplish the task.
diff --git a/code-studio/tutorials/compare-ai-models.md b/code-studio/tutorials/compare-ai-models.md
index 8ca98c4..5882ce9 100644
--- a/code-studio/tutorials/compare-ai-models.md
+++ b/code-studio/tutorials/compare-ai-models.md
@@ -2,7 +2,7 @@
title: Compare AI models for different tasks
description: Compare AI models in Syncfusion Code Studio and learn when to use Claude, Gemini, and GPT families for coding, debugging, refactoring, and fast iterations with examples.
platform: syncfusion-code-studio
-keywords: "compare ai models, choose model, syncfusion code studio, claude haiku 4.5, claude sonnet 4.5, gemini 2.5 flash, gemini 2.5 pro, gemini 3 flash, gpt-4.1, gpt-5, gpt-5 mini, gpt-5.1 codex, gpt-5.2, code generation, debugging, refactoring, reasoning, low latency"
+keywords: "compare-ai-models, choose-model, claude-haiku-4.5, claude-sonnet-4.5, gemini-2.5-flash, gemini-2.5-pro, gemini-3-flash, gpt-4.1, gpt-5, gpt-5-mini, gpt-5.1-codex, gpt-5.2, code-generation, debugging, refactoring, reasoning, low-latency"
---
# Compare AI models for different tasks
@@ -11,22 +11,27 @@ keywords: "compare ai models, choose model, syncfusion code studio, claude haiku
Code Studio provides access to multiple AI models, each optimized for different kinds of tasks. Some models respond quickly with concise results, while others focus on deeper reasoning, larger context, or code-heavy workflows.
-This guide helps you understand which AI model to use for which type of task, using practical examples and expected outcomes. Instead of experimenting blindly, you can choose a model that best matches your goal.
+This tutorial helps you understand which AI model to use for which type of task, using practical examples and expected outcomes. You will learn to match tasks to models based on complexity, latency requirements, and reasoning depth, enabling you to work more efficiently and produce better code.
+
+## Prerequisites
+
+**Code Studio Installation** - Download and configure the IDE: [Install and Configuration](/code-studio/getting-started/install-and-configuration)
## What You'll Learn
-By the end of this guide, you'll be able to:
+By the end of this tutorial, you will be able to:
-- Understand the strengths of each AI model available in Code Studio
-- Choose the right model based on task complexity
-- Know what kind of output to expect from different models
-- Apply these models confidently in real development scenarios
+- Identify the strengths and optimal use cases for each AI model in Code Studio
+- Choose the right model based on task complexity, latency needs, and reasoning requirements
+- Apply model selection strategies to real development scenarios
+- Recognize when to switch models mid-task for better results
+- Understand the trade-offs between speed, reasoning depth, and context size
-No prior AI expertise is required.
+No prior AI expertise is required. This tutorial focuses on practical decision-making rather than technical model details.
## Model Capabilities with Examples
-Below are common AI models available in Code Studio, explained through realistic tasks, example prompts, and expected results.
+Below are the AI models available in Code Studio, explained through realistic tasks, example prompts, and expected results. Each section explains **why** a particular model fits a specific task type.
### Claude Haiku 4.5
@@ -38,10 +43,13 @@ Below are common AI models available in Code Studio, explained through realistic
- Small utility functions and validation logic
- Boilerplate generation and documentation help
-Claude Haiku 4.5 is well-suited for developers who want quick, reliable answers without unnecessary complexity. It focuses on speed and clarity, making it ideal for lightweight workflows.
+**Why use this model**
+
+Claude Haiku 4.5 prioritizes speed without sacrificing reliability. When you need immediate feedback for straightforward tasks—such as generating standard patterns, writing basic validation, or creating documentation—this model delivers results in seconds. It avoids over-complicating simple requests, making it ideal for maintaining development momentum.
**Example Task**
+```
Create boilerplate code for a REST API endpoint that:
1. Accepts a GET request for /api/products?category=electronics
@@ -49,6 +57,7 @@ Create boilerplate code for a REST API endpoint that:
3. Includes a service layer for business logic
4. Includes proper error handling
5. Returns JSON response
+```
**Example Outcome**
@@ -114,9 +123,9 @@ export interface Product {
}
```
-**Why This Model Fits**
+**When to choose this model**
-Claude Haiku 4.5 excels at straightforward, multi-step coding tasks that do not require deep architectural reasoning. It is particularly effective for boilerplate generation such as scaffolding common patterns, writing validation logic like input checks and type guards, producing clear documentation, and handling routine refactoring tasks such as extracting utilities or improving code readability.
+Use Claude Haiku 4.5 when your task follows established patterns and does not require deep analysis. It excels at scaffolding common structures, writing validation logic with clear input/output contracts, producing readable documentation, and handling routine refactoring tasks where the approach is already known.
### Claude Sonnet 4.5
@@ -128,34 +137,43 @@ Claude Haiku 4.5 excels at straightforward, multi-step coding tasks that do not
- Tasks that benefit from explanation and contextual understanding
- Refactoring or evolving existing codebases
-Claude Sonnet 4.5 is well suited for scenarios where accuracy, clarity, and reasoning are equally important. It performs reliably on tasks that require understanding intent, handling edge cases, and producing code that is easy to read and maintain.
+**Why use this model**
-**Example Task - Deep Reasoning (Step-by-Step Mode)**
+Claude Sonnet 4.5 balances speed with thoughtful analysis. It understands intent, anticipates edge cases, and produces code that is both correct and maintainable. When a task requires more than pattern matching—such as debugging a tricky issue or refactoring legacy code—this model explains its reasoning, making it easier to learn and apply the solution.
+**Example Task**
+
+```
Why is my React component re-rendering unnecessarily?
+```
**Example Outcome**
```
-Let me analyze potential causes:
-1. **Object/Array Recreation**
-- Creating new objects in render causes reference changes
- - Fix: Memoize with useMemo or move outside component
-2. **Inline Function Definitions**
-- New function instances trigger child re-renders
-- Fix: Use useCallback for stable references
-3. **Parent Re-renders**
-- Children re-render when parent does
-- Fix: Wrap child in React.memo()
-4. **Context Value Changes**
-- Context object recreation triggers all consumers
-- Fix: Memoize context value
+Let me analyze potential causes:
+
+1. **Object/Array Recreation**
+ - Creating new objects in render causes reference changes
+ - Fix: Memoize with useMemo or move outside component
+
+2. **Inline Function Definitions**
+ - New function instances trigger child re-renders
+ - Fix: Use useCallback for stable references
+
+3. **Parent Re-renders**
+ - Children re-render when parent does
+ - Fix: Wrap child in React.memo()
+
+4. **Context Value Changes**
+ - Context object recreation triggers all consumers
+ - Fix: Memoize context value
+
Would you like me to review your specific code to identify the exact cause?
```
-**Why This Model Fits**
+**When to choose this model**
-Claude Sonnet 4.5 is designed to handle tasks that require both reasoning and context awareness. It performs well when working across multiple files, refining existing implementations, or translating ideas into structured solutions. The model can switch smoothly between quick answers and more deliberate, step-by-step problem solving, making it a strong choice for development workflows where correctness, explanation, and maintainability all matter.
+Use Claude Sonnet 4.5 when you need a model that can reason across multiple files, refine existing implementations, or translate high-level ideas into structured solutions. It performs well when correctness, explanation quality, and maintainability all matter, making it a strong general-purpose choice for most development workflows.
### Gemini 2.5 Flash
@@ -167,25 +185,25 @@ Claude Sonnet 4.5 is designed to handle tasks that require both reasoning and co
- Simple structural or type definitions
- High-throughput, cost-efficient workflows
-Gemini 2.5 Flash is a strong choice when responsiveness and efficiency matter most. It delivers well-rounded results quickly, making it ideal for fast development cycles and repetitive tasks.
+**Why use this model**
+
+Gemini 2.5 Flash optimizes for speed and efficiency. When you need immediate answers for well-defined tasks—such as writing a regex, defining a type, or generating a small utility—this model responds almost instantly without sacrificing accuracy. It is particularly effective in fast-paced development cycles where quick validation matters.
**Example Task**
+```
Generate a regular expression to validate a simple email address format (e.g., user@domain.com).
+```
**Example Outcome**
-The model produces a concise and correct interface definition with minimal explanation and a very fast turnaround.
-
-**Regex**
-
```
^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$
```
-**Why This Model Fits**
+**When to choose this model**
-Gemini 2.5 Flash is optimized for speed and cost efficiency while still providing reliable reasoning for straightforward tasks. It works especially well for rapid code generation, lightweight modeling, and scenarios where developers want immediate, usable output without the overhead of deep analysis.
+Use Gemini 2.5 Flash for tasks where responsiveness is critical and the solution is straightforward. It works especially well for rapid code generation, lightweight modeling, and scenarios where you want immediate, usable output without the overhead of deep analysis. This model is ideal for high-frequency, low-complexity requests.
### Gemini 2.5 Pro
@@ -197,11 +215,15 @@ Gemini 2.5 Flash is optimized for speed and cost efficiency while still providin
- Complex problem-solving across larger codebases
- Tasks that require careful validation and explanation
-Gemini 2.5 Pro is designed for scenarios where a strong understanding of logic, edge cases, and system behavior is critical. It excels when problems go beyond surface-level fixes and require thoughtful analysis.
+**Why use this model**
+
+Gemini 2.5 Pro excels at analytical tasks that require understanding system behavior and uncovering subtle issues. When a problem goes beyond surface-level fixes—such as identifying race conditions, analyzing performance bottlenecks, or hardening production code—this model provides thorough explanations and robust solutions.
**Example Task**
+```
I have a Node.js application with a suspected concurrency bug. I believe there is a race condition in my ordering logic that allows the inventory count to become negative, which should be impossible. Can you please analyze the following code snippets from inventoryService.js and orderService.js, identify the root cause of the bug, and provide a hardened, production-ready solution to fix it? Please explain the underlying problem and why your solution is robust.
+```
**Codebase Snippet**
@@ -219,7 +241,7 @@ async function updateStockCount(itemId, newCount) {
}
```
-File: inventoryService.js
+File: orderService.js
```javascript
import { getStockCount, updateStockCount } from './inventoryService.js';
@@ -244,11 +266,11 @@ async function placeOrder(itemId) {
**Example Outcome**
-Gemini 2.5 Pro not only identifies the race condition but also provides a robust, production-ready solution and a detailed explanation of the underlying concepts.
+**Identified Root Cause:**
-Identified Root Cause: "The core issue is a non-atomic read-modify-write operation. The getStockCount and updateStockCount functions are executed as separate, non-isolated steps. The time gap between reading the stock and writing the new value allows other concurrent requests to read the same stale data, leading to a race condition."
+"The core issue is a non-atomic read-modify-write operation. The `getStockCount` and `updateStockCount` functions are executed as separate, non-isolated steps. The time gap between reading the stock and writing the new value allows other concurrent requests to read the same stale data, leading to a race condition where multiple orders can decrement from the same starting value."
-Proposed Refactored Code (Using a Mutex Lock):
+**Proposed Refactored Code (Using a Mutex Lock):**
```javascript
import { getStockCount, updateStockCount } from './inventoryService.js';
@@ -278,9 +300,13 @@ async function placeOrder(itemId) {
}
```
-**Why This Model Fits**
+**Why This Solution Works:**
+
+The mutex ensures that only one order can execute the read-modify-write sequence at a time, preventing overlapping operations from corrupting the inventory state. The `finally` block guarantees the lock is released even if an error occurs, preventing deadlocks.
-Gemini 2.5 Pro is well suited for complex reasoning tasks that demand a deep understanding of code behavior and failure modes. It can analyze existing implementations, detect subtle logical issues, and propose robust, well-explained fixes, making it a strong choice for debugging, code improvement, and reliability-focused development workflows.
+**When to choose this model**
+
+Use Gemini 2.5 Pro when you need deep analysis of code behavior, failure modes, or logical correctness. It is well suited for debugging complex issues, improving code reliability, and scenarios where understanding the "why" behind a problem is as important as fixing it.
### Gemini 3 Flash (Preview)
@@ -288,20 +314,22 @@ Gemini 2.5 Pro is well suited for complex reasoning tasks that demand a deep und
- Rapid experimentation and prototyping
- Quick feedback during early development stages
-- Iterating on evolving or exploring tasks
+- Iterating on evolving or exploratory tasks
- Low-latency responses with configurable reasoning depth
- Testing new model capabilities with minimal setup
-Gemini 3 Flash (Preview) is designed to combine fast response times with stronger reasoning than earlier Flash models, making it well suited for experimentation and agent-style workflows where responsiveness still matters.
+**Why use this model**
+
+Gemini 3 Flash (Preview) combines fast response times with stronger reasoning than earlier Flash models. It is designed for experimentation, allowing you to quickly test ideas, iterate on agent-driven workflows, and explore new features without sacrificing responsiveness or incurring unnecessary overhead.
**Example Task**
+```
Find all customers who bought 'Product A' in 2023 but never purchased 'Product B', then calculate their total lifetime spend.
+```
**Example Outcome**
-The model responds quickly with a concise and correct implementation, providing minimal explanation and immediately usable code.
-
```sql
SELECT c.customer_id, SUM(o.total_amount) as lifetime_spend
FROM Customers c
@@ -315,9 +343,9 @@ AND c.customer_id NOT IN (
GROUP BY c.customer_id;
```
-**Why This Model Fits**
+**When to choose this model**
-Gemini 3 Flash (Preview) balances speed, efficiency, and improved reasoning, allowing developers to control how much analysis the model performs based on the task. This makes it useful for quickly testing ideas, iterating on agent-driven workflows, and exploring new features without sacrificing responsiveness or incurring unnecessary overhead.
+Use Gemini 3 Flash (Preview) when you want to quickly test ideas, iterate on agent-driven workflows, or explore new features. This model balances speed with improved reasoning, making it useful for prototyping and early-stage development where responsiveness still matters.
### GPT-4.1
@@ -329,11 +357,15 @@ Gemini 3 Flash (Preview) balances speed, efficiency, and improved reasoning, all
- Stable, predictable, and repeatable outputs
- Documentation, learning, and day-to-day development work
-GPT-4.1 is a dependable model for developers who value consistency and clarity. It performs especially well when tasks require understanding broader context or producing well-explained solutions.
+**Why use this model**
+
+GPT-4.1 delivers consistent, well-explained results across a wide range of tasks. When you need a model that can understand large context, follow complex instructions, and produce predictable outputs, GPT-4.1 is a dependable choice. It excels at providing clear explanations, making it ideal for learning and documentation.
**Example Task**
+```
Refactor a C# class to use dependency injection, explain the changes, and update all usages across multiple files.
+```
```csharp
public class EmailService {
@@ -345,19 +377,27 @@ public class EmailService {
GPT-4.1 analyzes the workspace, identifies all usages of the class, refactors the constructor to accept dependencies, updates all instantiations, and provides a clear explanation of the changes.
+**Refactored Class:**
+
```csharp
public class EmailService : IEmailService {
private readonly ISmtpClient _smtpClient;
+
public EmailService(ISmtpClient smtpClient) {
_smtpClient = smtpClient;
}
+
public void Send(string to, string message) { /* ... */ }
}
```
-**Why This Model Fits**
+**Explanation:**
+
+"The refactoring introduces constructor-based dependency injection by accepting `ISmtpClient` as a parameter. This decouples `EmailService` from concrete SMTP implementations, making the class testable and allowing runtime configuration. All instantiations across the codebase have been updated to pass the dependency through the constructor."
-GPT-4.1 excels at following instructions accurately and reasoning over large inputs, making it well suited for learning materials, documentation, and general software engineering tasks. Its consistent behavior and strong coding performance make it a solid choice when correctness, explanation quality, and predictability are important.
+**When to choose this model**
+
+Use GPT-4.1 when you value consistency, clarity, and predictability. It performs especially well on tasks that require understanding broader context, producing well-explained solutions, and maintaining stable behavior across similar requests. This makes it a solid choice for learning materials, documentation, and general software engineering tasks.
### GPT-5
@@ -369,11 +409,17 @@ GPT-4.1 excels at following instructions accurately and reasoning over large inp
- Cross-cutting concerns that span multiple layers or modules
- Tasks that benefit from broad context awareness
-GPT-5 is well suited for scenarios where understanding the full system context is essential. It can reason across files, components, and responsibilities to propose thoughtful, scalable solutions.
+**Why use this model**
+
+GPT-5 understands full system context and can reason across files, components, and responsibilities to propose thoughtful, scalable solutions. When a change affects multiple parts of a system—such as migrating frameworks, redesigning architecture, or implementing cross-cutting features—this model analyzes dependencies, proposes coherent plans, and executes multi-step changes reliably.
**Example Task**
-Migrate a Node.js/TypeScript API from Express to Fastify, add runtime validation and tracing, update all route usages, generate load tests, and produce a rollback plan.
+```
+Migrate a Node.js API from Express to Fastify, add runtime validation and tracing, update all route usages, generate load tests, and produce a rollback plan.
+```
+
+**Existing Code:**
```javascript
import express from 'express';
@@ -399,7 +445,7 @@ app.listen(3000, () => console.log('up on 3000'));
**Example Outcome**
-The model proposes a layered or modular architecture, explains the reasoning behind the design choices, and introduces maintainable patterns that improve separation of concerns and long-term scalability.
+**Step 1: Set up tracing infrastructure**
```javascript
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
@@ -413,27 +459,39 @@ provider.register();
export const tracer = trace.getTracer('api');
```
-
-```js
+
+**Step 2: Generate load tests**
+
+```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';
-export const options = { vus: 50, duration: '30s', thresholds: { http_req_duration: ['p(95)<200'] } };
+export const options = {
+ vus: 50,
+ duration: '30s',
+ thresholds: { http_req_duration: ['p(95)<200'] }
+};
export default function () {
const payload = JSON.stringify({
userId: '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed',
items: [{ sku: 'SKU-123', qty: 1 }]
});
- const res = http.post('http://localhost:3000/orders', payload, { headers: { 'Content-Type': 'application/json' } });
+ const res = http.post('http://localhost:3000/orders', payload, {
+ headers: { 'Content-Type': 'application/json' }
+ });
check(res, { 'status is 200': r => r.status === 200 });
sleep(1);
}
```
-**Why This Model Fits**
+**Why This Solution Works:**
+
+GPT-5 plans the migration systematically: setting up observability first to monitor the transition, migrating the framework with validation to prevent runtime errors, updating all route handlers consistently, generating tests to verify behavior, and providing rollback guidance to minimize risk. This approach ensures a high-confidence, production-grade migration.
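
The validation step of the migration can be sketched independently of the framework. Fastify would normally express this as a JSON Schema attached to the route; a plain function (with hypothetical names) is shown here so the check runs standalone:

```javascript
// Sketch of the runtime validation the migration adds to POST /orders.
// In the real migration, Fastify enforces an equivalent JSON Schema on
// the route and rejects invalid payloads with a 400 automatically.
function validateOrder(body) {
  if (!body || typeof body.userId !== 'string') {
    return { valid: false, error: 'userId must be a string' };
  }
  if (!Array.isArray(body.items) || body.items.length === 0) {
    return { valid: false, error: 'items must be a non-empty array' };
  }
  for (const item of body.items) {
    if (typeof item.sku !== 'string' || !Number.isInteger(item.qty) || item.qty < 1) {
      return { valid: false, error: 'each item needs a sku and a positive integer qty' };
    }
  }
  return { valid: true };
}
```

Centralizing the check like this is what lets the model update every route handler consistently while keeping request types and validation rules in one place.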
+
+**When to choose this model**
-GPT-5 plans and executes multi-file, multi-step changes, keeps types and validations consistent, injects observability, generates tests, and provides verification plus rollback guidance—suited for high-confidence, production-grade migrations.
+Use GPT-5 when a task requires understanding and coordinating changes across multiple parts of a system. It is well suited for architectural decisions, large-scale refactoring, and scenarios where planning, execution, and verification must all be handled thoughtfully.
### GPT-5 Mini
@@ -445,22 +503,26 @@ GPT-5 plans and executes multi-file, multi-step changes, keeps types and validat
- Generating examples, small scripts, and prompt engineering
- Low-latency assistant in IDE workflows
-GPT-5 Mini is designed for efficiency at scale, delivering strong general-purpose reasoning and coding accuracy while keeping response times and compute costs low.
+**Why use this model**
+
+GPT-5 Mini is optimized for efficiency at scale, delivering strong general-purpose reasoning and coding accuracy while keeping response times and compute costs low. When you need reliable results quickly—such as writing a small API endpoint, generating a test, or fixing a straightforward bug—this model provides fast, accurate output without unnecessary overhead.
**Example Task**
+```
Create a small FastAPI endpoint POST /summarize that validates input, caches results, and returns a concise summary plus metadata.
+```
+
+**Existing Code:**
```python
-A single helper with no validation or caching:
+# A single helper with no validation or caching:
def summarize_text(text):
- return external_llm_call(text)
+ return external_llm_call(text)
```
**Example Outcome**
-The model produces a simple, correct type definition with minimal explanation and a fast response, making it immediately usable in production code.
-
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, constr
@@ -489,15 +551,18 @@ def summarize(req: SummarizeRequest):
except RuntimeError:
raise HTTPException(status_code=502, detail="Summarization service unavailable")
return {"summary": summary, "length": len(req.text), "cached": True}
-
-Sample request/response
-Request: {"text":"Long article ..."}
-Response: {"summary":"Concise summary.","length":1234,"cached":false}
```
-**Why This Model Fits**
+**Sample Request/Response:**
+
+```
+Request: {"text": "Long article ..."}
+Response: {"summary": "Concise summary.", "length": 1234, "cached": false}
+```
+
+**When to choose this model**
-GPT-5 Mini is optimized for scenarios where speed, reliability, and cost control matter most. It performs exceptionally well on straightforward coding tasks, structured outputs, and repetitive development workflows, while maintaining a very low error and hallucination rate. This makes it a strong choice for high-volume systems, everyday developer assistance, and production environments that need consistent results without the overhead of deeper reasoning models.
+Use GPT-5 Mini for straightforward coding tasks where speed and reliability matter most. It performs exceptionally well on structured outputs, repetitive workflows, and production environments that need consistent results without the overhead of deeper reasoning models. This makes it ideal for high-volume systems and everyday developer assistance.
### GPT-5.1 Codex
@@ -509,19 +574,27 @@ GPT-5 Mini is optimized for scenarios where speed, reliability, and cost control
- Structured code reviews and consistency improvements
- Automation-friendly and agent-driven coding workflows
-GPT-5.1 Codex is purpose-built for software engineering workflows, handling everything from quick code cleanups to more involved refactoring tasks with a strong focus on correctness and consistency.
+**Why use this model**
+
+GPT-5.1 Codex is purpose-built for software engineering workflows. It handles everything from quick code cleanups to more involved refactoring tasks with a strong focus on correctness, consistency, and adherence to best practices. When you need code that follows established patterns and conventions reliably, this model delivers.
**Example Task**
+```
Migrate a large TypeScript/React data grid to server-driven pagination, update the Node.js API, add end-to-end Playwright tests, and summarize the impact for release notes.
+```
**Example Outcome**
-GPT-5.1 Codex inspects the client components, extracts shared query logic, introduces a typed pagination hook, updates API routes and validation, rewrites affected Redux slices, adds Playwright coverage for pagination edge cases, verifies CI scripts, and provides a crisp deployment checklist with regression risks.
+GPT-5.1 Codex inspects the client components, extracts shared query logic, introduces a typed pagination hook, updates API routes and validation, rewrites affected Redux slices, adds Playwright coverage for pagination edge cases, verifies CI scripts, and provides a deployment checklist with regression risks.
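
Stripped of the React and Redux specifics, the shared query logic behind such a pagination hook might reduce to a helper like this (names and limits are illustrative, not taken from an actual migration):

```javascript
// Illustrative core of the shared pagination logic the refactor extracts.
// Both the client hook and the Node.js API route can call this, so the
// two sides agree on paging semantics and server-side limits.
const DEFAULT_PAGE_SIZE = 25;
const MAX_PAGE_SIZE = 100;

function buildPageQuery({ page = 1, pageSize = DEFAULT_PAGE_SIZE } = {}) {
  const size = Math.min(Math.max(1, pageSize), MAX_PAGE_SIZE); // clamp to server limits
  const current = Math.max(1, page); // pages are 1-based; guard against 0 or negatives
  return {
    offset: (current - 1) * size,
    limit: size,
  };
}
```

Extracting one function like this is typical of how the model keeps client and server consistent during a server-driven pagination migration.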
+
+**Why This Solution Works:**
-**Why This Model Fits**
+The model ensures type safety across the stack, maintains consistent error handling, and generates comprehensive tests to verify the migration. By producing a deployment checklist and identifying regression risks, it reduces the likelihood of production issues.
-GPT-5.1 Codex blends high-context understanding with reliable multi-language edits, making it ideal for sophisticated refactors that span front-end, back-end, and testing layers while keeping explanations tight and actionable.
+**When to choose this model**
+
+Use GPT-5.1 Codex when you need high-context understanding combined with reliable multi-language edits. It is ideal for sophisticated refactors that span front-end, back-end, and testing layers while keeping explanations tight and actionable.
### GPT-5.2
@@ -531,22 +604,26 @@ GPT-5.1 Codex blends high-context understanding with reliable multi-language edi
- Production-quality code and robust design patterns
- Long, multi-step workflows and agent-driven tasks
- Large-context analysis across files, services, or documents
-- High confidence debugging, reviews, and refactoring
+- High-confidence debugging, reviews, and refactoring
+
+**Why use this model**
-GPT-5.2 is designed for demanding engineering and knowledge-work scenarios where depth, reliability, and consistency are critical.
+GPT-5.2 is designed for demanding engineering scenarios where depth, reliability, and consistency are critical. It can analyze complex workflows end-to-end, apply best-practice coding patterns, and support autonomous or tool-assisted development scenarios. When a task requires careful reasoning, long-context awareness, and dependable outcomes, this model delivers production-grade results.
**Example Task**
+```
Migrate a Node.js Express API endpoint to be idempotent and race-safe by introducing:
- an idempotency key header,
- a database uniqueness constraint,
- consistent error handling, and
-- Integration tests proving duplicate requests don't create duplicate rows.
+- integration tests proving duplicate requests don't create duplicate rows.
+```
**Example Outcome**
-The model applies structured error-handling patterns, improves readability and flow, and produces code suitable for real-world, production use.
+**Refactored Code:**
```javascript
import express from "express";
@@ -583,17 +660,36 @@ app.post("/orders", async (req, res) => {
});
```
-**Why This Model Fits**
+**Why This Solution Works:**
+
+The idempotency key header combined with a database uniqueness constraint ensures that duplicate requests with the same key return the existing order rather than creating a new one. The error handling distinguishes between constraint violations (expected, idempotent behavior) and genuine errors, making the endpoint safe for retries.
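
Stripped of the database specifics, the idempotent-create pattern behaves like this minimal sketch, with a `Map` standing in for the table's uniqueness constraint:

```javascript
// Sketch of idempotent order creation. A Map stands in for the table
// with a UNIQUE index on the idempotency key; in production the database
// enforces this atomically even across concurrent requests.
const ordersByKey = new Map();
let nextId = 1;

function createOrder(idempotencyKey, payload) {
  if (!idempotencyKey) {
    return { status: 400, body: { error: 'Idempotency-Key header required' } };
  }
  const existing = ordersByKey.get(idempotencyKey);
  if (existing) {
    // Duplicate request: return the stored order instead of inserting again.
    return { status: 200, body: existing };
  }
  const order = { id: nextId++, ...payload };
  ordersByKey.set(idempotencyKey, order);
  return { status: 201, body: order };
}
```

The real endpoint gets the same guarantee from the unique index: a second insert with the same key raises a constraint violation, which the handler maps back to the already-stored row rather than treating it as a failure.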
-GPT-5.2 excels at tasks that require careful reasoning, long-context awareness, and dependable outcomes. It can analyze complex workflows end-to-end, apply best-practice coding patterns, and support autonomous or tool-assisted development scenarios. This makes it a strong choice for production-grade engineering work where correctness, clarity, and robustness matter most.
+**When to choose this model**
+
+Use GPT-5.2 for tasks that require careful reasoning, long-context awareness, and dependable outcomes. It excels at production-grade engineering work where correctness, clarity, and robustness matter most, making it ideal for high-stakes refactoring, debugging, and system-wide improvements.
+
+## Verify Your Learning
+
+To confirm you understand how to choose the right model:
+
+1. **Quick Test**: Try generating a simple TypeScript interface using both Claude Haiku 4.5 and Gemini 2.5 Flash. Notice the response speed and output quality—both should be fast and correct.
+
+2. **Debug a Real Issue**: Use Claude Sonnet 4.5 or Gemini 2.5 Pro to analyze a bug in your current project. Observe how the model explains the root cause and proposes a fix.
+
+3. **Compare Complexity Handling**: Give GPT-5 or GPT-5.2 a multi-file refactoring task. Notice how it plans the changes, maintains consistency across files, and provides verification steps.
+
+You have successfully completed this tutorial when you can:
+
+- Pick the appropriate model for a given task without trial and error
+- Explain why a particular model is better suited for a specific scenario
+- Recognize when a task requires switching from a faster model to a reasoning-focused one
## Next Steps
-Now that you understand how different AI models behave:
+Now that you understand how different AI models behave and when to use them:
+
+1. **Explore Advanced Features**: Try using the [Agent](/code-studio/features/agent) feature with reasoning-focused models like GPT-5 or GPT-5.2 for autonomous multi-step workflows.
-- Use faster models for simple or repetitive tasks
-- Switch to reasoning-focused models for debugging and refactoring
-- Choose deeper models for complex or high-risk changes
-- Refine prompts to get more accurate and helpful results
+2. **Configure Custom Models**: If your organization has specific model preferences, learn how to [configure custom models](/code-studio/reference/configure-properties/configure-opensource-model) to align with your team's needs.
-As task complexity increases, adjusting the model choice can significantly improve both productivity and code quality.
\ No newline at end of file
+As task complexity increases, adjusting your model choice can significantly improve both productivity and code quality.