Claude Sonnet 4.5 is the latest coding-optimized model from Anthropic, hailed as “the best coding model in the world”. It delivers state-of-the-art performance on software engineering benchmarks and excels at multi-step reasoning and coding tasks. For developers, integrating Claude Sonnet 4.5 into Visual Studio Code (VS Code) can supercharge productivity. It offers a massive 200K token context window for understanding large codebases and exhibits superior code quality and secure coding practices. In practical terms, teams using Claude report dramatic boosts in development velocity and code accuracy.
This guide provides a comprehensive, step-by-step tutorial on setting up Claude Sonnet 4.5 in VS Code and implementing prompt checkpoints and rollback mechanisms. We will cover everything from prerequisites and API integration to saving prompt history, reverting code changes, debugging, and best practices. By the end, you’ll have a robust workflow for harnessing “Claude Sonnet 4.5 with VS Code” to accelerate your coding workflows.
Prerequisites
Before integrating Claude Sonnet 4.5 with VS Code, ensure you have the following prerequisites in place:
- Anthropic API Access: An Anthropic account with access to the Claude API (Pro/Max or relevant plan) and an API key. Sign up via the Anthropic Console and generate an API key in the API Keys section. You’ll need this key to authenticate your calls to Claude Sonnet 4.5.
- Development Environment (Node.js or Python): A working Node.js or Python environment for running API calls or scripts. Claude’s API can be invoked via REST calls or using official SDKs in multiple languages (Anthropic provides client libraries for Python, TypeScript/Node, etc.). Ensure you have Node.js (v16+) or Python (3.8+) installed, along with any package manager (npm/pip) for installing SDKs.
- VS Code and Extensions: Visual Studio Code installed (v1.98.0 or higher). We recommend installing helpful extensions to streamline integration:
- REST Client (by Huachao Mao): Allows you to send HTTP requests directly from VS Code. This is useful for testing Claude’s API endpoints with your API key.
- Code Runner (by Jun Han): Enables running code snippets or scripts (Node/Python) quickly in the VS Code editor.
- (Optional) Anthropic Claude VS Code Extension: Anthropic provides an official Claude Code extension (beta) that brings Claude into a VS Code sidebar. While we focus on custom integration and scripting, the extension is another way to use Claude in VS Code (requiring an Anthropic subscription). It supports inline diffs and real-time code suggestions in the editor.
With these prerequisites ready – an API key, a scripting runtime, and a prepared VS Code – you can proceed to set up the integration.
Setting Up Sonnet 4.5 API Integration in VS Code
There are two main ways to integrate Claude Sonnet 4.5 into VS Code: using direct API calls (via REST or SDK) or leveraging the official extension. Here we focus on the API approach for transparency and control, but note that the official extension can simplify usage (if you have Claude Code set up).
1. Configure API Key Securely: First, store your Anthropic API key securely. Do not hardcode the key in scripts. Instead, use environment variables or VS Code’s secret storage:
- On your system, set an environment variable `ANTHROPIC_API_KEY` with your key (the official SDKs will auto-read this). For example, on Linux/macOS add to `~/.bashrc`: `export ANTHROPIC_API_KEY="xys...yourkey..."`.
- Alternatively, use a `.env` file in your project and load it in your script (with a library like `python-dotenv` or `dotenv` for Node).
- If using the REST Client extension, you can define an environment variable within VS Code’s REST Client settings or use a placeholder in your `.http` request file.
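To illustrate the `.env` approach without pulling in a dependency, here is a minimal stdlib-only loader sketch (in real projects, prefer `python-dotenv`; the parsing here is deliberately simplified):

```python
import os

def load_env_file(path=".env"):
    """Minimal .env loader sketch; prefer python-dotenv in real projects."""
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    # setdefault: variables already set in the shell win
                    os.environ.setdefault(key.strip(), value.strip().strip('"'))
    except FileNotFoundError:
        pass  # no .env file; rely on the shell environment

load_env_file()
api_key = os.environ.get("ANTHROPIC_API_KEY")  # may be None if not configured
```

Because the loader uses `setdefault`, a key exported in your shell always takes precedence over the `.env` file.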
2. Install Anthropic SDK (Optional): Anthropic offers official SDKs to simplify API calls. You can install these for your language:
Python: Run `pip install anthropic`. Then initialize the client in your script:

```python
import anthropic

client = anthropic.Anthropic()  # reads API key from environment by default
```
Using the SDK allows calling the Messages API easily. For example, a basic request:

```python
response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=300,
    messages=[{"role": "user", "content": "Hello, Claude"}]
)
print(response.content[0].text)  # response.content is a list of content blocks
```
This sends a prompt to Claude Sonnet 4.5 and prints the assistant’s reply.
Node.js/TypeScript: Use the official package with `npm install @anthropic-ai/sdk`. For example:

```javascript
const Anthropic = require('@anthropic-ai/sdk');

const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

async function main() {
  const res = await client.messages.create({
    model: "claude-sonnet-4-5",
    messages: [{ role: "user", content: "Hello, Claude" }],
    max_tokens: 300
  });
  console.log(res.content); // array of content blocks in the response
}

main();
```
The SDK handles HTTP calls under the hood. (Note: you can also use fetch/axios to call the REST endpoint directly if you prefer not to use the SDK.)
3. Test with REST Client (optional): As an alternative to writing a script, you can create a VS Code `.http` file for quick testing. Note that the Anthropic API authenticates with an `x-api-key` header (not a Bearer token) and requires an `anthropic-version` header. For example, create a file `claude-test.http` with the following content:

```http
POST https://api.anthropic.com/v1/messages HTTP/1.1
x-api-key: {{API_KEY}}
anthropic-version: 2023-06-01
Content-Type: application/json

{
  "model": "claude-sonnet-4-5",
  "max_tokens": 100,
  "messages": [
    {"role": "user", "content": "What is 2 + 2?"}
  ]
}
```
Replace {{API_KEY}} with your key or configure it in the REST Client env. Send the request (click “Send Request” in VS Code) and you should receive a JSON with Claude’s completion. This confirms your API access is working.
4. Integrate with VS Code Workflow: Decide how you want to prompt Claude during development:
- Script approach: You can write a script (Python or Node) that reads your prompt (perhaps from a file or input), calls the Claude API, and outputs the completion. With the Code Runner extension, you can execute this script with a single click or keystroke inside VS Code whenever you want AI assistance.
- Extension approach: If you installed the Claude Code for VS Code extension, log in with your Anthropic account in VS Code. The extension runs Claude as a side-panel where you can chat and apply code changes directly. (The extension is essentially a GUI for Claude’s coding assistant—under the hood it uses the same Claude Sonnet 4.5 model and API.)
Sample API Request Structure: Whether using the SDK or raw HTTP, the request structure for Claude is similar to OpenAI’s chat API. You provide:
- The `model` name (use `"claude-sonnet-4-5"` for the latest Sonnet 4.5).
- A list of `messages`, each with a `role` and `content`. Roles are typically `"user"` for your queries and `"assistant"` for Claude’s responses. (There is also an optional top-level `system` parameter for instructions to guide the assistant.)
- Parameters like `max_tokens` (the maximum tokens Claude should generate), `temperature` (to adjust randomness), etc. For coding, you might keep `temperature` fairly low (e.g. 0-0.5) to get more deterministic results, and use `stop_sequences` if needed to end output at certain delimiters.
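Putting those pieces together, a full request payload might look like the following sketch (the system text and stop sequence here are illustrative choices, not requirements):

```python
# A complete Messages API payload, expressed as a plain dict
request = {
    "model": "claude-sonnet-4-5",
    "max_tokens": 500,
    "temperature": 0.2,  # low temperature for more deterministic code output
    "system": "You are a concise senior code reviewer.",  # optional top-level instructions
    "messages": [
        {"role": "user", "content": "Refactor this loop into a list comprehension."}
    ],
    "stop_sequences": ["\n\n---\n"],  # optional; illustrative delimiter
}
# With the Python SDK, the dict unpacks directly into the call:
# client.messages.create(**request)
```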
Authenticating in VS Code: Ensure your API key is not exposed in your source code. Use environment variables which VS Code can access when running your scripts. For example, if using a Python script via Code Runner, you may need to launch VS Code from a terminal where ANTHROPIC_API_KEY is set, so the environment passes through. If using Node, similarly ensure process.env.ANTHROPIC_API_KEY is set in the environment. Some developers store keys in VS Code’s user settings (not recommended for API keys) or use secrets management. The bottom line: treat your Anthropic key like a password – keep it secure and out of version control.
Prompting from VS Code
With integration set up, you can now send prompts to Claude directly from VS Code. Here are some strategies for effective prompting within your IDE:
- Chat via Script or Extension: If using a custom script, you might create a simple REPL or command function. For example, a Python script could read your input from the terminal or from a file and then call `client.messages.create()`. You can open an integrated terminal in VS Code and run `python chat_with_claude.py` to have a quick dialogue. The official VS Code extension provides a chat sidebar – you type a request and Claude’s reply appears in the panel, with options to apply code changes. Either way, the experience is like having Claude as an AI pair programmer inside VS Code.
- Creating and Running Code Prompts: If you want Claude to generate or modify code, formulate your prompt clearly and provide context. For instance, you can open a file (say `app.js`), select a block of code, and ask Claude (via the extension or by feeding that code into your prompt) something like: “Optimize this function for performance” or “Find bugs in the selected code”. In a script-based workflow, you might write the prompt in a text file (e.g., `prompt.txt`) and have your script read that and send it to Claude. Using VS Code tasks or keybindings, you can automate sending the content of the current file/selection to your script. The goal is to make triggering Claude as frictionless as running a build or test.
- Contextual Chaining (Using History): Claude does not maintain conversational context between API calls unless you supply the prior messages each time (the conversation is stateless across calls). To have multi-turn interactions or follow-ups, you need to store previous responses and include them in the next prompt’s `messages`. For example, suppose you ask Claude to write a function and it responds with code. Next, you want to refine that code. You should call the API again with a `messages` list that includes:
  - User message 1: your first request,
  - Assistant message 1: Claude’s first code answer,
  - User message 2: your follow-up prompt (e.g., “Now optimize this code for speed”).

  Claude will then produce Assistant message 2 as the new reply, taking into account the prior Q&A context. By chaining messages like this, you achieve continuity (just as you would in a chat thread).
- Storing Responses: In practice, you might want to save Claude’s replies for later reference. You can do this manually (copy-paste the output to a file) or automatically:
  - If using the SDK in a script, append each interaction to a list or log file. For example, maintain a list `conversation = []`. After each `client.messages.create` call, do `conversation.append({"role": "assistant", "content": response.content})`. You can even dump this list as JSON to a file to resume later.
  - The VS Code extension automatically keeps a history of the conversation in the sidebar (and the terminal interface has a searchable prompt history using <kbd>Ctrl+R</kbd>). If using the extension, you can scroll up to see previous prompts/answers or copy them into your documents.
- Using VS Code REST Client for prompts: Another neat trick is writing multiple requests in a single `.http` file for different scenarios. For example, one request could be a prompt to generate code, another to refactor code. You can run them as needed. This method makes each prompt a “reusable snippet” that you can tweak and re-run, and it keeps a record of what prompts you have tried (the file itself serves as documentation of prompt experiments).
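As a concrete sketch of this chaining-and-logging pattern (the helper name and log file are illustrative, not part of any API):

```python
import json

conversation = []  # the full message history we resend on every call

def record_turn(role, content, log_path="conversation_log.jsonl"):
    """Append a message to the in-memory history and persist it as a JSON line."""
    turn = {"role": role, "content": content}
    conversation.append(turn)
    with open(log_path, "a") as f:
        f.write(json.dumps(turn) + "\n")

# Build the multi-turn context exactly as it would be sent to the API:
record_turn("user", "Write a function that reverses a string.")
record_turn("assistant", "def reverse(s):\n    return s[::-1]")
record_turn("user", "Now optimize this code for speed.")
# The next call would pass messages=conversation to client.messages.create(...)
```

Because the log is JSON lines, you can reload it later to resume a conversation from any point.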
By embedding Claude Sonnet 4.5 into VS Code’s workflow, you can seamlessly ask coding questions, generate new code, and iteratively refine it without switching context to a separate chat app. Next, we’ll explore how to implement checkpointing and rollback so you can manage Claude’s contributions safely.
Checkpointing & Rollback
One of the powerful new features of Claude Sonnet 4.5 (especially in Claude Code) is checkpoints, which allow you to save progress and roll back to previous states instantly. When integrating Claude with VS Code, we can design our own lightweight checkpoint system to capture code changes or prompt states at various milestones. This prevents AI-generated changes from ever causing irrecoverable edits – you can always revert if needed.
What Are Checkpoints? In Anthropic’s Claude Code context, checkpoints automatically save your code before each Claude-driven change, and allow instant rewind to earlier versions. For instance, if Claude refactors a file, a checkpoint of the file’s original state is saved. If you dislike the changes, you hit undo (Esc Esc or a command) to revert to the saved state. You can choose to restore just the code, just the conversation, or both. This concept is essentially version control for AI actions.
Designing Prompt Checkpoints in VS Code: To emulate this, you can use a combination of local file storage and memory tracking:
- Saving Code Snapshots: Each time before Claude writes or modifies code in a file, save the current version of that file. This could be as simple as copying the file to a `.bak` file or committing the changes in Git (more on that in Best Practices). For example, if you ask Claude to implement a function in `utils.py`, manually save `utils.py` as `utils_before_claude.py` (or stage a git commit) before applying Claude’s suggestion. Many diff/merge tools or even VS Code’s Timeline can help manage these snapshots.
- Saving Prompt/Response Pairs: If you’re in a multi-turn session, consider logging each prompt and response as a checkpoint in the conversation. For instance, after each `client.messages.create` call, append the pair to a conversation log file (e.g., `conversation_log.md`). Mark sections as “Checkpoint 1”, “Checkpoint 2”, etc., so you can identify conversation states. This is helpful if you want to fork the conversation – you can go back to an earlier checkpoint and branch a different line of inquiry without redoing everything.
Techniques for Comparing and Reverting:
- Diff Tools: Use VS Code’s built-in diff viewer to compare Claude’s changes. If you saved a pre-change file and the post-change file, open one on the left and one on the right in VS Code to see exactly what Claude modified. The Claude VS Code extension actually shows inline diffs in real-time for Claude’s edits. In a manual setup, you can simulate this by saving versions and diffing them yourself.
- Revert via File Replace: To roll back, replace the current file with the backup from the desired checkpoint. For instance, if `version1.py` was saved, and after some Claude edits you have `version2.py`, to roll back simply copy `version1.py` over `version2.py`. If using Git, you could do `git restore <filename> --source=<commit-hash>` to restore from a specific commit.
- Conversation Rollback: If you want to roll back a conversation (prompt sequence) to an earlier point, you can truncate your `messages` list. For example, if you had messages [User1, Assistant1, User2, Assistant2] and you feel the conversation went off track at Assistant2, you can “undo” by discarding the last two entries and asking a new question continuing from Assistant1’s context. Essentially, treat the earlier conversation state as a new checkpoint to resume from.
Code Snippet – Implementing a Simple Checkpoint System: Below is a conceptual Python snippet that demonstrates saving checkpoints and rolling back. This assumes Claude is generating code and we want to preserve older versions:
```python
import anthropic, shutil

client = anthropic.Anthropic()  # API key from env
conversation = []               # to store message history

# Function to send a prompt and get Claude's reply as plain text
def ask_claude(prompt):
    conversation.append({"role": "user", "content": prompt})
    response = client.messages.create(
        model="claude-sonnet-4-5", max_tokens=500, messages=conversation
    )
    reply = response.content[0].text  # extract text from the first content block
    conversation.append({"role": "assistant", "content": reply})
    return reply

# Example usage:
code_file = "utils.py"

# 1. Save a checkpoint of the code before asking Claude to modify it
shutil.copy(code_file, f"{code_file}.checkpoint1")  # back up current code

answer = ask_claude("Optimize the function in utils.py for speed and clarity.")

# 2. Write Claude's answer (code) into the file
with open(code_file, "w") as f:
    f.write(answer)

# ... (later, if we want to roll back the changes) ...
shutil.copy(f"{code_file}.checkpoint1", code_file)  # restore the original version
conversation = conversation[:-2]  # drop the last user/assistant pair from history
print("Rolled back to checkpoint1. Original code restored.")
```
In the above approach, we explicitly saved `utils.py` to `utils.py.checkpoint1` before applying Claude’s suggestion, then restored it to roll back. We also removed the last interaction from the conversation log so that Claude’s next responses won’t include the reverted prompt context. You could extend this idea by numbering checkpoints (`checkpoint2`, `checkpoint3`, etc.) or even by timestamp (e.g. `utils.py.2025-10-28_08-40.backup`).
Note: Checkpoints capture Claude’s edits, not your own manual edits. If you modify code manually, those changes should still be managed with traditional version control. The checkpoint system is meant to safeguard AI-generated changes since those might be exploratory. It’s wise to use checkpoints in combination with Git for a safety net. Checkpointing gives you quick undo ability, while Git gives you long-term change history.
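Extending the snippet above to timestamped checkpoints could look like this sketch (the file-naming convention is an illustrative choice):

```python
import shutil
import time

def save_checkpoint(path):
    """Copy a file to a timestamped backup and return the backup path."""
    stamp = time.strftime("%Y-%m-%d_%H-%M-%S")
    backup = f"{path}.{stamp}.backup"
    shutil.copy(path, backup)
    return backup

def rollback(backup, path):
    """Restore a file from a previously saved checkpoint."""
    shutil.copy(backup, path)
```

You would call `save_checkpoint("utils.py")` before applying each Claude edit, keep the returned paths in a list, and `rollback` to any of them later.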
Debugging & Logging
When building a Claude integration, you’ll inevitably encounter errors or performance issues. Proper debugging and logging practices will save you time. Here are some tips:
- Handle API Errors Gracefully: The Claude API returns standard HTTP error codes for issues. For example, a 401 indicates an authentication error (check your API key), 429 means you hit a rate limit, 500 is a server-side error, etc. Code-wise, wrap your API calls in try/except (Python) or a promise catch (JavaScript) to catch exceptions. Log the error type and message. The error response includes a message and a unique request ID, which you can log for reference. For instance, if using the Python SDK, you can access `e.response.json()` in an exception handler to see details, or use the `_request_id` property of the response object for debugging. In case of rate limits, implement an exponential backoff (wait and retry after a delay) or reduce prompt frequency.
- Monitor Latency and Performance: Claude Sonnet 4.5 is fast for a large model (often responding in under a second for small prompts), but responses can slow down with very large contexts or outputs. If you notice slow responses, consider enabling streaming (the Anthropic API supports Server-Sent Events streaming if you set `stream=True` in the request). Streaming allows partial output to arrive sooner, improving perceived latency. You can also log the time taken for each API call (record a timestamp before and after the request) to track performance over time. If certain prompts consistently take long, they might be hitting the token limit or doing heavy reasoning – consider breaking them into smaller sub-tasks.
- Logging Inputs and Outputs: Maintain a log of all interactions for auditability. This can be as simple as appending to a text file or as structured as writing JSON lines. A log entry might include: timestamp, prompt (perhaps truncated if long), and Claude’s response. Logging is invaluable for:
  - Audit Trail: Later reviewing what was asked and answered, especially if an AI-introduced change caused a bug. You can pinpoint which prompt led to that code.
  - Rollback Reference: Even with checkpoints, a log of Claude’s outputs means you have every version of the code it generated. If you decide a week later that an earlier solution was better, you can retrieve it from the logs.
  - Error Diagnosis: If Claude gave an irrelevant or incorrect answer, you can examine the logged prompt to see if the fault was in the question context.
- VS Code Integrated Logging: If you prefer viewing logs inside VS Code, you could create an Output Channel. For example, if writing a VS Code extension or script that runs via tasks, direct the logs to a dedicated output panel (using VS Code API or simply outputting to the terminal). This way you don’t have to leave the editor to check logs.
- Common Issues: Some typical hiccups include:
- “Prompt is too long” errors: This means your message array or content exceeded the model’s context length or request size. Claude 4.5 has a very large context (200k tokens) so it’s usually not an issue, but the API has a 32 MB request size cap. If you hit limits, consider summarizing or chunking the input.
- Malformed JSON or requests: The API expects well-formed JSON. If using REST Client, a missing quote or brace will cause a 400 invalid request. Always double-check your JSON structure (some REST clients highlight syntax errors).
- API key scope issues: Ensure your key has access to the Claude 4.5 model. If you get a permission error (403), your key might be limited to certain models or needs an upgraded plan.
- Network issues: If the response is not coming back, check your internet connection and any corporate firewall that might block external API calls. The Anthropic endpoint is HTTPS on standard port 443, so it typically works anywhere web traffic is allowed.
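The backoff advice above can be sketched as a generic retry wrapper. This is a minimal sketch: in real code you would pass the SDK's rate-limit error class (e.g. `anthropic.RateLimitError`) as `retry_on` so only transient failures are retried:

```python
import random
import time

def with_retries(call, max_retries=3, base_delay=1.0, retry_on=(Exception,)):
    """Retry a callable with exponential backoff plus a little jitter."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except retry_on:
            if attempt == max_retries:
                raise  # out of retries; surface the error to the caller
            # 1s, 2s, 4s, ... plus up to 100ms of jitter to avoid thundering herds
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Usage would be `with_retries(lambda: client.messages.create(...))`, keeping the API call itself unchanged.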
By proactively logging and handling errors, you’ll create a robust integration that can be used in critical development workflows without fear of silent failures or lost information.
JetBrains IDE Note
Not a VS Code user? You can achieve a similar setup in JetBrains IDEs (IntelliJ IDEA, PyCharm, WebStorm, etc.) with minimal tweaks:
- HTTP Client in JetBrains: JetBrains IDEs have a built-in HTTP client similar to VS Code’s REST Client. You can create an HTTP request file (`.http` or `.rest`) in your project and write the Claude API calls there. For example, the same request we showed earlier can be used. Hitting the Send icon will display the response right in the editor. This is great for quickly testing prompts from within the IDE.
- Terminal & Scripting: All JetBrains IDEs have an integrated terminal. You can run your Node/Python scripts for Claude prompting directly in that terminal (just like you would in VS Code’s). This means the CLI approach of calling Claude via a script works identically in JetBrains – you might even create a custom run configuration for it.
- JetBrains Plugins: The CodeGPT plugin (third-party) supports Anthropic Claude integration in JetBrains as well. According to its documentation, you can install the plugin, then select Claude Sonnet 4.5 as the model and input your Anthropic API key to use Claude inside the IDE chat or autocomplete. Similarly, Anthropic’s own CLI, Claude Code, can be used in JetBrains by running it in the terminal window (though the official extension is VS Code-specific at the moment).
- Workflow Parity: The checkpoint and rollback principles apply the same way. If using JetBrains, you might rely even more on Git integration (since JetBrains has a robust version control UI) to snapshot and revert changes. And you can keep a conversation log in a scratch file or comment blocks.
In short, Claude Sonnet 4.5 integration isn’t limited to VS Code – any environment where you can make web requests or run a script can leverage Claude. Whether you use VS Code, PyCharm, or even a simple Jupyter notebook, the underlying steps of calling the API and managing prompt history remain the same.
Best Practices
To get the most out of Claude Sonnet 4.5 in your coding workflow, keep these best practices in mind:
- Prompt Versioning & History: Treat your prompts and Claude’s responses as valuable project artifacts. Use version control to manage them when appropriate. For instance, you can store frequently used prompts or instructions in a `prompts.md` file in your repo. Anthropic’s CLI uses `CLAUDE.md` files for persistent context and recommends checking them into git to share with your team. You could adopt a similar approach: maintain a markdown file with prompt tips or an FAQ for your codebase that Claude should know. Also, if you develop custom prompt templates or slash commands, store them in a shared location (Anthropic suggests using a `.claude/commands` folder under version control for teams). This ensures all developers on the project benefit from improved prompts and can reproduce results.
- Limit Token Usage (Cost-Efficient Prompting): Claude Sonnet 4.5 has the same pricing as its predecessor ($3 per million input tokens, $15 per million output tokens). While it’s capable of 200K-token contexts, you should avoid sending extremely large prompts unless necessary. Some tips to reduce token usage:
  - Provide only the relevant code snippet to Claude rather than an entire file (if you’re asking about a specific function).
  - Utilize the `system` prompt to give high-level guidance or context, instead of restating it in every user prompt.
  - When Claude’s output is verbose, you can set a reasonable `max_tokens` to cap it. If you only need a summary or a short fix, don’t allow a 5,000-token essay.
  - Clean up your conversation context periodically. If earlier messages are no longer needed, you can drop them to save tokens in subsequent calls. Remember, it’s you who controls what context to send each time.
  - Use `stop_sequences` to cut off unnecessary continuations (e.g., if you ask for code, you might provide a stop sequence like "```" to stop when the code block ends).
  - Monitor your usage via Anthropic’s dashboard or usage API to understand cost. If integrating deeply, you might build in a simple token counter for each prompt/response and display it, so you’re aware of the cost impact in real time.
- Quality Prompting & Iteration: To get the best results, write clear and specific prompts. For coding tasks, include details like the programming language, libraries, or a brief description of the function’s purpose. If the answer isn’t good on the first attempt, try iterating: perhaps you need to add, “Don’t make any changes to X part,” or “Follow the project’s coding style in the answer.” Claude is quite adept at following nuanced instructions when given them, so refine prompts rather than settling for a subpar answer.
- Leverage Plan Mode / Task Lists: Claude Sonnet 4.5 (especially via Claude Code) can break down large tasks into sub-tasks (often called “Plan Mode”). If you have a complex request (“build me a small app”), it might be useful to prompt Claude to first outline a plan. In VS Code or your script, you can ask: “List the steps you will take before writing the code.” This acts like a checkpoint in thinking – you get a plan you can verify or tweak. Once the plan looks good, you then greenlight each step. This not only yields better organized output but also creates intermediate checkpoints (each step’s result can be reviewed and undone if needed).
- Team Collaboration via Git: When multiple developers interact with Claude, consistency is key. Encourage the team to share effective prompts or strategies. If one developer found a great way to ask Claude to refactor code (e.g., a certain wording yields better results), codify that knowledge – possibly in an `AI_PROMPTS_GUIDE.md` in your repo. Also, use Git for what it’s best at: versioning code changes. Even though Claude’s checkpoint system is helpful, always commit significant AI-generated changes to Git. Treat Claude’s contributions just like a human’s: code review them, test them, and commit them with appropriate messages. That way, if something goes wrong, `git revert` or `git bisect` can pinpoint the issue. Git also serves as the ultimate rollback for any changes, human or AI. Remember Anthropic’s advice: checkpoints are great, but use them alongside traditional version control.
- Security and Privacy: Do not paste sensitive credentials or personal data into prompts. Claude is trained to be secure and avoid disclosing secrets, but it’s wise to handle secrets outside of prompts (e.g., use dummy values or describe the data abstractly). Additionally, since Claude can execute code via tools or give commands (especially in Claude Code agentic mode), be cautious with the actions you allow. Keep an eye on what suggestions you are blindly accepting — ensure you review changes, especially destructive ones (e.g., file deletions or database operations suggested by AI). Using the `/permissions` command or an allowed tools list (if using Claude Code) is recommended to prevent unwanted actions.
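One of the token-usage tips above suggested a simple real-time token counter. A crude heuristic (roughly four characters per token for English text; for exact figures, use Anthropic’s token-counting endpoint) is enough to flag oversized prompts before sending them – the budget value below is an illustrative assumption:

```python
def rough_token_estimate(text):
    """Crude token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def warn_if_large(text, budget=150_000):
    """Return True if the estimated token count approaches the context budget."""
    return rough_token_estimate(text) > budget
```

Calling `warn_if_large(prompt)` before each request gives you a cheap guardrail without an extra API round trip.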
By following these best practices, you can create an efficient, cost-effective, and safe workflow. You’ll essentially develop a “Claude API VS Code integration” playbook for your team – covering prompt versioning, usage guidelines, and rollback strategies – leading to smooth “Anthropic Sonnet 4.5 coding workflows” in day-to-day development.
Conclusion
Integrating Claude Sonnet 4.5 with VS Code brings the power of a frontier AI model directly into your development environment. From writing boilerplate and fixing bugs to brainstorming architecture, Claude can act as a tireless pair programmer. With the checkpoint & rollback mechanisms in place, you gain confidence to let Claude make bold changes – knowing you can revert in seconds if needed. This safety net encourages exploration and faster iteration. Developers have observed significant improvements in productivity, with Claude handling grunt work and even complex multi-file refactors autonomously.
By following the steps outlined – setting up the API integration, using prompt chaining for context, logging interactions, and practicing prudent version control – you can create a robust workflow that leverages “Claude Sonnet 4.5 with VS Code” effectively. Whether you are using VS Code or another IDE, the principles remain: a tight feedback loop with AI assistance, safeguarded by good engineering practices, leads to better code and faster delivery.
In summary, Claude Sonnet 4.5 integration in VS Code can accelerate dev workflows and testing dramatically. It’s like having an expert engineer on call: one who can draft code, explain it, revise it, and even rollback gracefully when you change your mind. By optimizing prompts and using the checkpoint/rollback strategy, you’ll tap into Claude’s strengths (massive context, reasoning, coding know-how) while retaining control over your codebase’s evolution. Embrace this new AI coding partner – with careful setup and practice, it can transform your development experience for the better.