Choosing the Right Model in Cursor

A number of the big players are coming out with their own AI coding assistants (e.g., OpenAI’s Codex, Anthropic’s Claude Code, and Google’s Gemini CLI). However, one of the advantages of using a third-party tool like Cursor is that you have the option to choose from a wide selection of models. The downside—of course—is that, as Uncle Ben would always say, “With great power comes great responsibility.”

Cursor doesn’t just give you a single AI model and call it a day—it hands you a buffet. You’ve got heavy hitters like OpenAI’s GPT series (now including the newly-released GPT-5), Anthropic’s Claude models (including the shiny new Opus 4.1), Google’s Gemini, along with Cursor’s own hosted options and even local models you can run on your machine.

Different models excel in different areas, and selecting wisely has a significant impact on quality, latency, and cost. Think of it like picking the right guitar for the gig—you could play metal riffs on a nylon-string classical, but wouldn’t you rather have the right tool for the job?

A Word on “Auto” Mode

Cursor also offers Auto mode, which will pick a model for you based on the complexity of your query and current server reliability. It’s like autopilot—but if you care about cost or predictability, it’s worth picking models manually. Cursor’s documentation describes it as selecting “the premium model best fit for the immediate task” and “automatically switch[ing] models” when output quality or availability dips. In practice, it’s a reliability‑first, hands‑off default so you can keep coding without thinking about providers.

Use Auto when you want to stay in flow and avoid babysitting model choice. It’s especially handy for day‑to‑day edits, smaller refactors, explanation/QA over the codebase, and any situation where provider hiccups would otherwise force you to switch models manually. Because Auto can detect degraded performance and hop to a healthier model, it reduces stalls during outages or rate‑limit blips. 

Auto is also a good “first try” when you’re unsure which model style fits—Cursor’s guidance explicitly calls it a safe default. If you later notice the conversation needs a different behavior (more initiative vs. tighter instruction‑following), you can switch and continue. But, with that said, let’s dive into the differences between the models themselves for those situations where you want to take control of the wheel.

Nota bene: Evaluating how “good” a model is for a given task is largely a subjective art. So, for this post, we’re going to strike a careful balance between my own experience and a requisite amount of reading other people’s hot takes on Reddit, so that you don’t have to subject yourself to that.

Claude Models (Sonnet, Opus, Opus 4.1)

Claude has become a fan favorite in Cursor, especially for frontend work, UI/UX refactoring, and code simplification. I will say, I like to think that I am pretty good at this whole front-end engineering schtick, but even so, I am sometimes impressed.

  • Claude 3.5 Sonnet – Often the “default choice” for coding tasks. It’s fast, reliable, and has a knack for simplifying messy code without losing nuance.
  • Claude Opus 4 – Anthropic’s flagship for deep reasoning. Excellent for architectural planning and critical refactors, though slower and pricier.
  • Claude Opus 4.1 – The newest version, with sharper reasoning and longer context windows. This is the model you pull out when you’re dealing with a sprawling repo or thorny system design and you want answers that feel almost like a senior architect wrote them.

Trade-off: Claude models are sometimes cautious—they’ll decline tasks that a GPT model might at least attempt. But the output is usually more focused and aligned with best practices. I’ve also noticed that Claude has a tendency to get side-tracked and work on other tangentially related tasks that I didn’t explicitly ask for. That said, I’m guilty of this too.

GPT Models (GPT-3.5, GPT-4, GPT-4o, o3, GPT-5)

OpenAI’s GPT line has been the workhorse of AI coding.

  • GPT-3.5 – Blazing fast and cheap, perfect for boilerplate generation and small tasks.
  • GPT-4 / GPT-4o – Solid all-rounders. Great for logic-heavy work, nuanced refactors, and design patterns. GPT-4o is especially nice as a “daily driver” because it balances cost, speed, and capability.
  • o3 – A variant tuned for better reasoning and structured answers. Handy for debugging or step-by-step problem solving.
  • GPT-5 – The new heavyweight. Think GPT-4 but with significantly deeper reasoning, longer context, and a much better grasp of codebases at scale. It’s particularly strong at handling multi-file architectural changes and design discussions. If GPT-4 was like working with a diligent senior dev, GPT-5 feels closer to having a staff engineer who can keep the whole system in their head.

Trade-off: GPT models sometimes get “lazy”—they’ll sketch a partial solution instead of finishing the job. But when you want factual grounding or logic-intensive brainstorming, they’re hard to beat. GPT-5 in particular tends to go slower and check in more often. So, it’s a bit more of a hands-on experience than the Claude models. That said, given Claude’s tendency to go on side quests, I am not sure this is a bad thing. GPT-5 will often do the bare minimum but then come to you with suggestions for what it ought to do next and I find myself either agreeing or choosing a subset of its suggestions.

Gemini Models (Gemini 2.5 Pro)

Google’s Gemini slots in nicely for certain tasks: complex design, deep bug-hunting, and rapid completions. It’s more of a specialist tool—less universal than Claude or GPT, but very effective when you hit the right workload. Historically, one of the perks of Gemini was its massive context window (around 2 million tokens). In the months since it was released, however, other models have caught up—namely Opus and GPT-5. Even Sonnet 4 now rocks a 1 million token context window.

I typically find myself using Gemini for research tasks. “Hey Gemini, look over my code base and come up with some suggestions for how I can make my tests less flaky and go write them to this file.” Its large context window makes it great for these kinds of tasks. It’s no slouch in your day-to-day coding tasks either. I just typically find myself reaching for something lighter—and cheaper.

DeepSeek Coder

Cursor also offers DeepSeek Coder, a leaner, cost-effective option hosted directly by Cursor. It’s good for troubleshooting and analysis, and useful if you want more privacy and predictable costs. That said, it doesn’t quite match the top-tier frontier models for heavy generative work. 

Local Models (Llama 2 Derivatives, etc.)

Sometimes you just need to keep everything on your own machine. Cursor supports local models, which are slower and less powerful but guarantee maximum privacy. These shine if you’re working with highly sensitive code or under strict compliance requirements. This is not my area of expertise. Mainly because my four-year-old MacBook can’t run these models at the same speed as one of OpenAI’s datacenters can.

Model Selection Strategy

Here are some general heuristics I’ve found useful:

  • For small stuff (boilerplate, stubs, quick utilities): GPT-4o or a local model keeps things fast and cheap.
  • For day-to-day coding: Claude Sonnet 4 and GPT-4.1 are solid defaults. They balance reliability with performance. Gemini 2.5 Flash is also a strong contender in this department.
  • For heavy lifting (large refactors, architecture, critical business logic): GPT-5 or Claude Opus 4.1 are the power tools. They’re not cheap, but often it costs less to get it right the first time. What I’ll typically do is have them write their plan to a Markdown file, review it, and then let a lighter weight model take over from there.
  • When stuck: Swap models. If Claude hesitates, try GPT. If GPT spins in circles, Claude often cuts to the chase. This is not a super scientific approach, but it’s wildly effective—or at least it feels that way.
  • Privacy first: Use local models or Cursor-hosted DeepSeek when your code should never leave your machine. I’ve traditionally worked on open-source stuff. So, this hasn’t been a huge concern of mine, personally.

Editor’s note: If you really want to level up with your AI coding skills, you should go from here right to Steve’s course: Cursor & Claude Code: Professional AI Setup.

Evaluating New Models

New models drop all of the time, which raises the question: How should you think about evaluating a new model release to see if it’s a good fit for your workflow?

Capability—Can it actually ship fixes in your codebase, not just talk about them? Reasoning‑forward models like OpenAI’s o3 and hybrid “thinking” models like Claude 3.7 Sonnet are pitched for deeper analysis; use them when you expect layered reasoning or ambiguous requirements. 

Behavior—Does it take initiative or wait for explicit instructions? Cursor’s model guide groups “thinking models” (e.g., o3, Gemini 2.5 Pro) versus “non‑thinking models” (e.g., Claude‑4‑Sonnet, GPT‑4.1) and spells out when each style helps. Assertive models are great for exploration and refactors; obedient models shine on surgical edits. 

Context—Do you need a lot of context right now? If you’re touching broad cross‑cutting concerns, enable Max Mode on models that support 1M‑token windows and observe whether plan quality improves enough to justify the slower, pricier runs. Having a bigger context window isn’t always a good thing. Regardless of what the model’s maximum context window size is, the more you load into that window, the longer it’s going to take to process all of those tokens. Generally speaking, having the right context is way better than having more context.

Cost and reliability—Cursor bills at provider API rates; Auto exists to keep you moving when a provider hiccups. New models often carry different throughput/price curves—compare under your real workload, not just benchmarks. Cost is a tricky thing to evaluate because if a model costs more per token but can accomplish the task in fewer tokens, it might end up being cheaper when all is said and done. (With made-up numbers: a model priced at $10 per million tokens that finishes in 50k tokens costs $0.50, while a $3-per-million model that burns 250k tokens across retries costs $0.75.)

Here is my pseudo-scientific guide for kicking the tires on a new model.

  1. Freeze variables. Use the same branch, same repo state, and the same prompt for each run. Turn Auto off when you’re pinning a candidate so you’re not measuring routing noise. Cursor’s guide confirms Auto isn’t task‑aware and excludes o3—so when you test o3 or any very new model, pin it. 
  2. Pick three task archetypes. Choose one surgical edit, one bug‑hunt, and one broader refactor. That trio exposes obedience, reasoning, and context behavior in a single pass. Cursor’s “modes” page clarifies that Agent can run commands and do multi‑file edits—ideal for these trials. 
  3. As Peter Drucker (or John Doerr, but I digress) used to say: Measure what matters. For each task and model, record: did tests pass; how much did it modify; did it follow constraints; how many agent tool calls and shell runs; and wall‑clock duration. Cursor’s headless CLI can stream structured events that include the chosen model and per‑request timing—perfect for quick logging (there’s a small scorecard sketch just below).

Repeat this process with Max Mode if the model you’re evaluating advertises giant context. You’re testing whether the larger window yields better plans or just slower ones.
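
To make that logging concrete, here’s a minimal sketch of the kind of scorecard I keep per trial. It isn’t tied to any Cursor API; every field is something you record yourself (or copy out of whatever the headless CLI emits), and all of the names here are my own invention.

```ts
// eval-log.ts: a tiny, hypothetical harness for recording model-trial results.
// Run with something like `npx tsx eval-log.ts` (Node 18+).
import { appendFileSync } from "node:fs";

interface TrialResult {
  model: string;                 // the model you pinned (Auto turned off)
  task: "surgical-edit" | "bug-hunt" | "refactor";
  testsPassed: boolean;          // did the existing suite pass afterward?
  filesModified: number;         // how much of the repo it touched
  followedConstraints: boolean;  // did it stay inside the instructions?
  toolCalls: number;             // agent tool calls / shell runs you observed
  durationSeconds: number;       // wall-clock time for the run
}

// Append one JSON object per line; easy to diff between runs or load into a spreadsheet.
function logTrial(result: TrialResult, file = "model-trials.jsonl"): void {
  const entry = { ...result, recordedAt: new Date().toISOString() };
  appendFileSync(file, `${JSON.stringify(entry)}\n`);
}

// Example entry; the numbers are illustrative, not a benchmark.
logTrial({
  model: "gpt-5",
  task: "bug-hunt",
  testsPassed: true,
  filesModified: 3,
  followedConstraints: true,
  toolCalls: 12,
  durationSeconds: 418,
});
```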

Wrapping Up

Model choice in Cursor isn’t just about “which AI is best”—it’s about matching the right tool to the task. Claude excels at simplifying and clarifying, GPT shines at reasoning and factual grounding, Gemini offers design chops, and local models guard your privacy.

And with GPT-5 and Opus 4.1 now in the mix, we’re entering a phase where models can reason about your codebase almost like a human teammate. The trick is knowing when to bring in the heavy artillery and when a lighter model will do the job faster and cheaper.

Getting Started with Cursor

I don’t love the term “vibe coding,” but I also don’t like doing tedious things.

Over the last few months, we’ve seen a number of AI-driven development tools. Cursor is probably the most well-known at this point. But big players are starting to come out with their own, like OpenAI’s Codex, Anthropic’s Claude Code, Google Gemini CLI, and Amazon’s Kiro.

Think of Cursor as Visual Studio Code’s ambitious younger cousin—the one who not only borrows your syntax highlighting but also brings a full brain along for the ride. It’s also, quite literally, a fork of its bigger cousin. In fact, if you weren’t looking closely, you could be forgiven for confusing it with Visual Studio Code.

I should note that Microsoft has also been racing to add Cursor-like features to Visual Studio Code, and a lot of what we’re going to talk about here also applies to Copilot in Visual Studio Code.

Editor’s note: If you really want to level up with your AI coding skills, you should go from here right to Steve’s course: Cursor & Claude Code: Professional AI Setup.

Getting Set Up: The Familiar On-Ramp

If you’ve ever installed Visual Studio Code, you already know the drill. Download, install, run. Cursor smooths the landing with a one-click migration from VS Code—your extensions, themes, settings, and keybindings slide right over. Suddenly, you’re in a new editor that looks a lot like home but has some wild new tricks up its sleeve.

Once you’re settled, Cursor gives you a few ways to work with it:

  • Inline Edit (Cmd/Ctrl+K) – Highlight some code, and then tell Cursor what you want to happen (e.g. “Refactor this function to use async/await”), and watch Cursor suggest a tidy diff right in front of your eyes. Nothing sneaky—just a controlled, color-coded change you can approve or toss if it’s not what you had in mind.
  • AI Chat (Cmd/Ctrl+L) – This is like ChatGPT, but it knows your codebase. It hangs out alongside your editor panes. Ask why a component is behaving weirdly, brainstorm ideas, or generate new code blocks. By default, it sees the current file, but you can widen its gaze to the whole repo with @codebase.
  • The Agent (Cmd/Ctrl+I) – For the big jobs. Describe a goal (“Add authentication with GitHub and Google”), and Cursor will plan the steps, touch multiple files, and even run commands—always asking before it does anything dangerous. This is where you go from “pair programmer” to “project collaborator.”

Some Inspiration for the Quick Editor

The inline editor is Cursor’s scalpel—it’s sharp, precise, and surprisingly versatile once you start leaning on it. A few of my favorite quick tricks:

  • Refactor without the tedium: Highlight a callback hell nightmare, hit Cmd/Ctrl+K, and ask Cursor to rewrite it with async/await. Boom—cleaner code in seconds. (There’s a small before-and-after sketch just after this list.)
  • Generate boilerplate: Tired of writing the same prop-type interfaces or test scaffolding? Select a stub, tell Cursor what you need, and let it flesh things out.
  • Convert styles on the fly: Need to move from plain CSS to Tailwind or from Tailwind to inline styles? Cursor can handle the translation with a single instruction.
  • Explain before you change: Select a gnarly function and just ask Cursor “explain this.” You’ll get a quick natural-language breakdown before deciding what to tweak.
  • Add guardrails: Highlight a function and say, “Add input validation with Zod,” or “Throw if the input is null.” Cursor will patch in the safety nets.
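
For a taste of that first trick, here’s a hypothetical before-and-after (the types and helper functions are stubs I made up; the “before” is a nested-promise pyramid, which Cursor flattens the same way it would a callback pyramid):

```ts
interface Post {
  id: number;
  title: string;
}

// Stub helpers so the example stands alone; imagine real network calls here.
const fetchUser = async (id: string) => ({ id, name: "Ada" });
const fetchPosts = async (userId: string): Promise<Post[]> => [{ id: 1, title: `Posts for ${userId}` }];

// Before: the nested-then pyramid you might highlight.
export function loadDashboardOld(userId: string): Promise<Post[]> {
  return fetchUser(userId).then((user) => {
    return fetchPosts(user.id).then((posts) => {
      return posts;
    });
  });
}

// After: what "rewrite this to use async/await" should hand back.
export async function loadDashboard(userId: string): Promise<Post[]> {
  const user = await fetchUser(userId);
  return fetchPosts(user.id);
}
```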

These tricks work best when you’re hyper-specific with what you want. Think of it less like a magic wand and more like a super helpful, pair-programming buddy who thrives on clear, concrete instructions. That’s the scalpel. But Cursor also gives you bigger hammers when you need them.

Getting the Most Out of the Chat and Agent

As I alluded to above, Chat (Cmd/Ctrl+L) is for conversation and exploration. It’s best for asking “why” or “what if” questions, brainstorming, or generating code you’ll shape yourself. I use this all of the time to think through various approaches before I write any code. I treat it like a co-worker that I’m bouncing ideas off of—except I don’t have to interrupt them.

  • Keep prompts specific (“Explain how this hook manages state across renders” beats “Explain this”).
  • Pull in the right context with @files or @codebase so answers stay grounded in your project.
  • Use it as a sounding board before you start refactoring—it’ll surface tradeoffs you might miss.

The Agent (Cmd/Ctrl+I) is for execution. Think of it as delegating work to a teammate who follows your plan:

  • Start with a high-level description, then ask the agent to generate a step-by-step plan before running anything.
  • Approve changes incrementally—don’t green-light a sweeping set of edits unless you’ve reviewed the plan.
  • Pair it with tests and Git. Strong test coverage makes it easy to validate the agent’s work, and frequent commits let you roll back if things get messy.
  • Use it for repetitive or cross-file tasks—things that would normally take you 20 minutes of hunt-and-peck are often solved in one go.

Here are some examples of things you might choose to toss at an agent:

  • “Add authentication with GitHub and Google using Supabase. Show me the plan first.”
  • “Migrate all class-based components in @components to functional components with hooks.”
  • “Convert this component to use Tailwind classes instead of inline styles.”

In short: chat is your whiteboard, agent is your task runner. Bounce ideas in chat, then graduate to the agent when you’re ready to automate.

Why Context Is Everything

In a large enough code base, you’re not going to be able to keep the entire thing in your head at any given time—and Cursor can’t either. In fact, this is probably one of the few places where you have an edge over an LLM—for now.

If you’re looking to get the most out of Cursor and other tools, then managing context is the name of the game. Sure, Cursor can index your code base, but sometimes that can be too much of a good thing. You’re going to want to pull in the specific parts of your code base that you want it to know about; otherwise, it’s hard to blame it if it starts heading off in a direction that you didn’t expect. If you didn’t explain what you wanted or give the necessary context to a human, it’s unlikely that they’d have what they need in order to be successful, and Cursor is no different. Without context, it’s like a smart intern working blindfolded. It might guess, it might improvise, and sometimes it invents nonsense (“hallucinations” is the fancy term). Feed it the right context, though, and Cursor becomes sharp, fast, and eerily helpful.

Context does a few magical things:

  • Cuts down on guesswork.
  • Keeps answers specific to your code instead of generic boilerplate.
  • Helps the AI reason about structure and dependencies instead of flailing.
  • Saves tokens, which means you save money.

Your job is to do the Big Brain Thinking™ about the overall big picture and then give Cursor the context it needs in order to do the tedious grunt work.

How Cursor Handles Context

Cursor is not leaving you high and dry in this regard. It has some built-in smarts: it grabs the current file, recently viewed files, edit history, compiler errors, and even semantic search results. It will follow your dependency graph and read the first little bit of every file in order to get a sense of what each one does.

But the real control comes from explicit context management.

  • @Files / @Folders – Point Cursor to exact code.
  • @Symbols – Zero in on a function, class, or hook.
  • @Docs – Pull in external documentation (yours or the framework’s).
  • @Web – Do a live web search mid-chat.
  • @Git – Bring in commit history or diffs.
  • @Linter Errors – Hand Cursor your error messages so it can fix them.
  • @Past Chats – Keep long conversations coherent.
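
Putting a few of these together, a chat prompt might look something like this (the file path is made up, and the exact way you attach context can vary a bit between Cursor versions):

“Look at @Files src/hooks/useCart.ts along with the current @Linter Errors and fix the type errors without changing the hook’s public API. If you need the original intent, pull in @Git history for the last commit that touched this file.”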

That’s just the tactical layer. For strategy, Cursor gives you rules and Notepads.

  • .cursor/rules live in your repo, version-controlled, shaping Cursor’s behavior: “Always use React Query,” “Prefer async/await,” “Don’t leave TODO comments.” Think of them as your project’s constitution.
  • Notepads are like sticky notes on steroids—bundles of prompts, docs, and references you can inject whenever needed. They’re local, but great for organizing reusable prompts or team knowledge.

Notepads allow you to keep little snippets of information that you can reference at any time and pull into context—without having to type the same things over and over.

Here is an example of some rules to guide Cursor towards writing TypeScript and/or JavaScript in a way that aligns with your—or my, in this case—preferences:

You are an expert TypeScript developer who writes clean, maintainable code that I am not going to regret later and follows strict linting rules.

- Use nullish coalescing (`??`) and optional chaining (`?.`) operators appropriately
- Prefix unused variables with an underscore (e.g., `_unusedParam`)

# JavaScript Best Practices

- Use `const` for all variables that aren't reassigned, `let` otherwise
- Don't use `await` in return statements (return the Promise directly)
- Always use curly braces for control structures, even for single-line blocks
- Prefer object spread (e.g. `{ ...args }`) over `Object.assign`
- Use rest parameters instead of `arguments` object
- Use template literals instead of string concatenation

# Import Organization

- Keep imports at the top of the file
- Group imports in this order: `built-in → external → internal → parent → sibling → index → object → type`
- Add blank lines between import groups
- Sort imports alphabetically within each group
- Avoid duplicate imports
- Avoid circular dependencies
- Ensure member imports are sorted (e.g., `import { A, B, C } from 'module'`)

# Console Usage

- Console statements are allowed but should be used judiciously
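
For illustration, here’s a tiny, made-up snippet of the sort of code these rules nudge Cursor toward: grouped imports, `const` by default, optional chaining with a nullish fallback, rest parameters plus object spread, an underscore-prefixed unused parameter, and no `await` in a return statement. (It assumes `zod` is installed, purely for the example.)

```ts
// Hypothetical module that follows several of the rules above.
// Imports are grouped: built-in first, then external.
import { readFile } from "node:fs/promises";

import { z } from "zod";

const SettingsSchema = z.object({ theme: z.string().optional() });
type Settings = z.infer<typeof SettingsSchema>;

// Rest parameters instead of `arguments`; object spread instead of Object.assign.
export function mergeSettings(...layers: Settings[]): Settings {
  return layers.reduce((acc, layer) => ({ ...acc, ...layer }), {} as Settings);
}

// Optional chaining plus nullish coalescing for the fallback; unused param prefixed with an underscore.
export function resolveTheme(settings?: Settings, _context = "startup"): string {
  return settings?.theme ?? "system";
}

// Don't use `await` in the return statement; return the Promise directly.
export function loadSettings(path: string): Promise<Settings> {
  return readFile(path, "utf8").then((raw) => SettingsSchema.parse(JSON.parse(raw)));
}
```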

Best Practices for Keeping Cursor Sharp

The one thing that I’ve learned from using Cursor every day for a few months now is that all of those Best Practices® that you know you’re supposed to do but you might’ve gotten sloppy with in the past? They’re extra important these days. For example, the better your tests are, the easier it is for Cursor to validate whether or not it successfully accomplished a task—and didn’t cause a regression in the process. It’s one thing to manually test your own code over and over, but it’s extra sobering to have to manually test code that you didn’t write. The better your Git etiquette is, the easier it will be to roll back to a known good state in the event that something goes off the rails.

  • Review before you merge. Always. The AI is good, but it’s not omniscient.
  • Commit early and often. Git is still your real safety net.
  • Be precise in prompts. “Make this more efficient” is vague. “Replace recursion with iteration to avoid stack overflow” is crisp. (There’s a small example of exactly that swap just after this list.)
  • Break it down. Ask Cursor to outline a plan before making changes.
  • Iterate. Think of it like a dialogue, not a vending machine.
  • Mind your open files. The fewer distractions, the better Cursor performs.
  • Keep files lean. Under 500 lines helps Agent mode stay accurate.
  • Stay private when you need to. Ghost Mode ensures nothing leaves your machine.
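
As a concrete example of what that kind of precision buys you, here’s roughly the change that “replace recursion with iteration to avoid stack overflow” is asking for (the Category type and the flatten helpers are made up for illustration):

```ts
interface Category {
  name: string;
  children?: Category[];
}

// Before: recursive flatten; fine until someone feeds it a pathologically deep tree.
export function flattenRecursive(category: Category): string[] {
  const childNames = (category.children ?? []).flatMap(flattenRecursive);
  return [category.name, ...childNames];
}

// After: the same pre-order traversal with an explicit stack, so depth can't blow the call stack.
export function flattenIterative(root: Category): string[] {
  const names: string[] = [];
  const stack: Category[] = [root];
  while (stack.length > 0) {
    const current = stack.pop()!;
    names.push(current.name);
    // Push children in reverse so they pop back out in their original order.
    stack.push(...[...(current.children ?? [])].reverse());
  }
  return names;
}
```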

Wrapping Up

Cursor isn’t just an editor with AI bolted on. With proper context management, it becomes a thoughtful coding partner that amplifies your strengths, fills in gaps, and accelerates the mundane parts of software development. Used well, it’s less about “asking AI to code for me” and more about orchestrating an intelligent partner in your workflow.

TL;DR: The more precisely you guide Cursor, the more it feels like it really understands your project—and that’s when the magic happens.

Out-of-your-face AI

A very interesting aspect of AI smashing its way into every software product known to man is how it’s integrated. What does it look like? What does it do? Are we allowed to control it? UX patterns are evolving around this. In coding tools, I’ve felt the bar being turned up on “anticipate what I’m doing and offer help”. Personally, I’ve gone from “hey, that’s nice, thanks” to “woah, woah, woah, chill out, you’re getting in my way.”

I’m sure this will be fascinating to watch for a long time to come. For example, “Subtle Mode” in Zed gets AI more “out of your face” and if you want to see a suggestion, you press a button. I love that idea. But I also understand Kojo Osei’s point here: there should be no AI button.

Notes on the Code Editors with AI Landscape

No surprise to anyone: there is a lot of movement in the AI space with regard to helping with code. It’s a rather perfect use case for AI and a perfect set of customers, so I expect it to continue. I’m not an expert in any of this, but I’ve been having a play lately and figured I’d jot down notes.

Plugins

It seemed like offering AI plugins was the move at first. I imagine the thinking from the companies that make them was that you can come to developers; you don’t need developers to come to you. A lot of power can be had through a plugin. GitHub Copilot is a plugin, so clearly a big player in this market agrees. The rub is that what a plugin can do is entirely subject to the capabilities that the editor makes available.

  • Various Editors + GitHub Copilot — This was the real shakeup combo that ignited this space. There was a good year or so when it was in technical preview and free, which got developers really stoked for the high-quality auto completions. It felt like a real UX innovation to offer the gray ghost text out ahead of what you were already coding, which you could tab to accept or cycle through other options. Copilot has of course grown up and now offers more features like chat and multi-file editing.
  • Various Editors + Tabnine — Tabnine pre-dates Copilot by a bit, but by the time I was playing with it, Copilot was dropping and it didn’t feel like they could co-exist inside one editor very smoothly. Around that 2021 timeframe, I remember being quite impressed by how it did autocomplete that was clearly smarter than the normal IntelliSense and seemed to be informed by the active project. Tabnine seems to be going strong and has evolved since then, and it looks like it has the same essential feature set as Copilot. Offers a limited free plan, then starts at $9/month.
  • VS Code + Cline — The Cline plugin seems to lean into the “Agent” capabilities of AI, meaning it does more than just suggest edits. It can create files, execute terminal commands (with permission), examine screenshots, and use a web browser. It’s free, which attracts some developers.
  • Various Editors + Codeium — Started as editor plugins, but crucially to me, also a browser extension for Chrome. This means that you could get AI autocomplete on certain sites across the web automatically. It worked on CodePen and I’m still pretty impressed by the quality of the results there.
  • Various Editors + Cody — Cody is an AI plugin offering largely the same AI features as most of the others. But it appears to have some interesting differentiating extra features like sharing prompts across a team and gathering context from beyond just the code, like connecting to Notion. Offers a limited free plan, then starts at $9/month.
  • Various Editors + Augment — I haven’t had a chance to try Augment, but it again looks like largely the same feature set. They call their proactive editing “Next Edit,” which I like.

Forks of VS Code

It’s these that seem to be having a big moment right now, and they’re what got me interested in trying new things out and ultimately writing up these notes. It turns out that plugins just aren’t offering the level of access and control that these AI companies want. They want fuller control over the UI and UX, and since VS Code is open source, they can just fork it and do their own thing.

I also suspect that the term “Agents” is involved here, which refers to an AI being able to do more than just return text. Agents can run terminal commands, control a browser, feed answers back into themselves, crawl into a codebase as needed, understand linters, etc. I suspect that if you’re leaning heavily into this Agents idea, you want full control over the app such that you won’t run into walls you can’t tear down yourself.

  • Cursor — This was the first company I heard of that did this and I didn’t quite know what to make of it at first. Are they going to keep up with VS Code releases? (Like some niche browsers do with the browser engines they are created from). How much are they really changing about VS Code… is it worth it? I heard enough good things that I was convinced to give a paid plan a try and I can see what people mean! Cursor feels much more proactive about suggestions, which is perhaps my favorite feature. The UX of interacting with AI is essentially via autocompletes, inline chat, sidebar chat, or “Composer”. It’s a smidge confusing, but in practice the help tends to be there when you need it, and just think of Composer as the big fancy one that can deal with multiple files.
  • Windsurf — Codeium, who I already think does a good job with their AI products, seems to be quite all-in on this VS Code fork editor. There is a free plan, free trial, and plans starting at $15/month. I’ve been using it off and on for a few weeks and I find it almost as proactive as Cursor, if a bit more sluggish. They call their Agent thingy “Cascade” and I had one experience using it to fix a bug where I watched it take an extremely deep dive into the code I was working on and come up with a very good solution, so color me impressed with the quality.
  • Trae — This is quite the wildcard to me, but here we are. ByteDance, the Chinese company behind TikTok, has released Trae for free. It’s in the same bucket as these others, a VS Code fork, with various AI stuff built-in. Also like the others, it splits the UI/UX into “Builder” (which is like Composer in Cursor or Cascade in Windsurf) and “Chat”, which is the quicker and more casual helper. Trae has no announced pricing at all, it’s just free, which will certainly drive adoption even if it raises a few eyebrows. I found the UI improvements over VS Code quite nice, the best of the bunch, but the AI help to be narrowly the worst of the bunch.
  • Aide — I haven’t tried Aide yet, but it looks like it’s extremely similar: proactive suggestions, Agent stuff, etc. Paid plans starting at $20/month. It being open source seems like a differentiator amongst this cohort.

Makes you wonder what Microsoft thinks about all this. Microsoft has done the heavy lifting here with VS Code and has their own business models centered around AI. Open source is open source and all, but it’s wild to see so many companies making money in exactly the same space with a thin veneer over the thing Microsoft has the most invested in.

Non-VS Code Forks

VS Code isn’t the only editor on the block, even if it is pretty huge these days.

  • Zed — Zed is an entirely new editor with good momentum and very strong bones. I’m certainly rooting for it! They have basic autocomplete going in there (arguably the most important AI feature), and what looks like a fairly fresh take on other tools. I’ve also heard tell of an active beta with even more, so it’s certainly something to watch.
  • LSP-AI — This is an open source language server, which would theoretically work with any editor, like Sublime Text, Neovim, or Emacs or whatever. I’ve only just heard of the Helix editor, which has some pretty big fans, so LSP-AI might be an answer in getting AI features into it, with its explicit language server support.

Other Editors

VS Code can run in the browser too, which you can see in Google’s Project IDX. Project IDX has very recently gotten AI chat built in, so it’s catching up with what we’re seeing in VS Code-based AI tools elsewhere.

The online editor Replit has essentially the same paradigms with “Agent and Assistant”. Focused on scaffolding out new projects, there is Bolt from the StackBlitz team, which has seen enormous growth and support, as well as v0 from Vercel, which helped them raise funding.

JetBrains is a big player in the editor market as well, and they’re in on all this too with their own JetBrains AI, which has, no surprise, AI autocompletions and an “Assistant” tool for more elaborate and contextual chats. The language-specific nature of JetBrains editors may turn out to be a competitive advantage since the interaction with the models could theoretically be honed to be extra helpful with that language. But like I said in the opening, I’m more of a casual user than an expert at this point.

The New Code Editor Zed has a Strong Start, and is now Open Source

I was compelled by the original release of Zed:

Zed is a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter.

Atom was a pretty darn fine code editor, only scuttled by the fact that Microsoft bought GitHub back in 2018. Atom was GitHub’s thing, and Microsoft already had VS Code.

At the heart of a code editor is an extremely strong code parsing tool and that’s exactly what Tree-sitter is. Plus you can tell they are making polished design a priority. Now that Zed is open source, it bodes well. The FAQ says they plan to work on extensions after it’s open sourced. That seems prudent, as I suspect the average VS Code user has ~15 extensions customizing things and making it work well for their exact dev environment.
