The Rapid Evolution of Coding Agents: Insights from Cursor's Jason Ginsberg

Jason Ginsberg from Cursor discusses the transformative changes in coding agents over the past year and their future potential in software development.



In the past year, the pace of change for coding agents has been so rapid that it is difficult to describe it merely as a “functional upgrade.”

A year ago, agents were primarily focused on code completion and making minor adjustments in a conversational manner. Today, engineers at Cursor are running multiple agents in parallel, allowing them to autonomously modify, debug, and review code in the repository, with human oversight only at the final stage. Developers are no longer watching every step of the agent’s operations; instead, they are getting used to “waiting for it to finish before checking the results.”

In a recent interview, Cursor’s engineering lead, Jason Ginsberg, made a clear assertion: this is not a gradual optimization but a generational shift. More importantly, he believes this change will occur within the next three to six months. In his view, agents will not only become “smarter” but will genuinely take over longer, more complex engineering tasks, reshaping the entire industry’s workflow.

A Year of Transformative Changes in Coding Agents

Harrison Chase: Jason, can you briefly introduce yourself and explain what Cursor is?

Jason Ginsberg: Sure. I am currently working on an AI programming tool and have been with Cursor for six months as the engineering lead for this product. To be honest, most of my daily work still involves coding and design. Before joining Cursor, I worked on Notion Mail at Notion. A few years ago, I founded a company called Skiff, which was later acquired by Notion. So, I have been focused on product development, mainly in the productivity tools sector.

Harrison Chase: That’s great. I have many topics to discuss with you. Let me start by asking about your views on the development of coding agents and the evolution of human-computer interaction models. You could be considered one of the pioneers in this field. I believe the development of coding agents has undergone several phases: from initial code auto-completion to conversational interactions integrated into IDEs, and now to various terminal tools and cloud-based asynchronous agents. How do you view this evolution?

Jason Ginsberg: I think the development of coding agents can indeed be described as “transformative,” and these changes have occurred in just over a year. As you mentioned, Cursor started with code auto-completion, which primarily provided line-by-line assistance and was mostly limited to single files. Since then, we have had to raise the product’s level of abstraction almost every few months, which is a significant product design challenge. Clearly, the emergence of agents allows developers to switch flexibly between multiple files and confidently let agents complete code modifications autonomously.

In the past couple of months, I’ve noticed a new shift in the industry: developers can now fully trust agents from project initiation to completion and conduct batch reviews of multiple files in the codebase. Therefore, we had to significantly redesign the overall product layout, shifting the focus from line-by-line code comparison to a more code review-oriented approach.

Looking ahead, our development focus will increasingly be on the collaborative operation of multiple agents. We need to enable quick validation of whether these agents are functioning correctly and allow them to work in parallel without being constrained by the various options and choices in the current single-dialogue mode.

Harrison Chase: What are the core factors driving these changes? Is it simply the improved performance of large models, or are there other influencing factors?

Jason Ginsberg: I believe the improvement in large model performance is a key factor, as it allows developers to trust the quality of code generated by agents more. Previously, everyone had to conduct very thorough reviews of the code generated by agents.

Additionally, there are now more sophisticated code review tools. For example, we have BugBot, and there are many similar tools in the market that can automatically check for issues in the code.

Moreover, I think the acceptance and confidence of developers in agent tools have been steadily increasing, to the point where they have become “addicted” to the convenience these tools offer. Once accustomed to relying entirely on agents for coding, switching back to traditional coding methods can be quite challenging. As a result, we are seeing more and more developers adopting agent-assisted programming as their default mode of operation.

The Secrets of Top Engineers: Relying on Agents?

Harrison Chase: What differences have you observed in how people use Cursor? Or how do you personally use Cursor?

Jason Ginsberg: Internally, our engineers use Cursor in a variety of ways. There are even a few engineers on the team who do not use the agent features at all, such as those responsible for security and infrastructure. So, there is indeed a portion of users who heavily rely on the code auto-completion feature, with most of their operations based on that. Surprisingly, I have found that some of the top engineers on the team, whom we call “core users,” rely entirely on agents for their work and even run multiple agents in parallel to handle tasks.

As for my personal usage habits, I do not design complex prompts or have any so-called “agent usage secrets.” My prompts are often quite short and may even contain spelling errors. I start multiple agents simultaneously for different tasks or different modules of the same problem and then wait for their results.

Currently, the feature I use the most is a new debugging mode we just released today. In this mode, the agent generates logs for self-evaluation, and the developer reproduces the relevant steps; the agent then checks the logs to determine whether the issue has been resolved. This feature is very practical because it trades computation for repeated attempts, eventually cracking issues that are extremely difficult to troubleshoot manually.

Harrison Chase: What is the debugging mode like? Why is there a need for a dedicated mode? Can’t debugging be done automatically? Can’t we just give the agent debugging instructions?

Jason Ginsberg: I actually agree with your point. So, during the development of the debugging mode, we had quite a bit of internal debate. The main reason is that Cursor already has many functional modes, such as planning mode, inquiry mode, etc., which are not easy for users to discover. We always believed that these modes are very practical, and ideally, the agent should automatically match and enable the most suitable mode based on the user’s operational context, without requiring manual switching.

Currently, the debugging mode needs to be manually activated because its interaction method is quite special. During operation, the agent pauses its current work to ask the user for feedback. If the user is not familiar with this interaction logic, it may be somewhat confusing.

Harrison Chase: What kind of questions does the agent ask, and what kind of feedback does it require from the user?

Jason Ginsberg: Let me give you an example. Suppose I am developing a front-end application and encounter a frustrating issue: the menu always pops up in the top left corner. I would tell the agent, “This menu needs to be anchored to the button’s position.” The agent would then start the server, add logs throughout the codebase, and propose a series of hypotheses that could explain the issue, such as “It might be a positioning parameter error” or “There might be an issue with the event binding logic.” After that, the agent would prompt me, “Please click this button to open the menu and see if the issue is resolved.” If I report that the issue still exists, the agent would check the generated logs and analyze them to determine which hypotheses hold. Usually, after two or three iterations of this process, the agent can identify and resolve the issue.
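The hypothesize-reproduce-check loop described above can be sketched roughly as follows. This is a toy simulation, not Cursor's implementation: all the helper functions here are hypothetical stand-ins for steps the real agent performs with a model and a live application.

```python
# A minimal, hypothetical sketch of the debug-mode loop described above.
# These helpers simulate the agent's steps; none of them are Cursor's real API.

def propose_hypotheses(issue, evidence=None):
    # The real agent would ask a model; here we return canned guesses.
    guesses = ["positioning parameter error", "event binding issue"]
    return [g for g in guesses if evidence is None or g in evidence]

def reproduce_and_collect_logs(issue):
    # The real agent asks the user to reproduce the bug; we fake log output.
    return "log: positioning parameter error at Menu.render"

def debug_loop(issue, max_rounds=3):
    hypotheses = propose_hypotheses(issue)
    for _ in range(max_rounds):
        logs = reproduce_and_collect_logs(issue)
        confirmed = [h for h in hypotheses if h in logs]
        if confirmed:
            return confirmed[0]  # apply the fix for the confirmed hypothesis
        hypotheses = propose_hypotheses(issue, evidence=logs)
    return None  # unresolved after max_rounds

print(debug_loop("menu pops up in the top-left corner"))
# → positioning parameter error
```

The key property is the one Ginsberg emphasizes: each round narrows the hypothesis set using log evidence rather than guessing blindly.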

Harrison Chase: How long do you think humans will still need to perform manual operations? Can’t the agent autonomously handle clicks and tests?

Jason Ginsberg: In one to two months, given the rapid pace of development in this industry.

Harrison Chase: Earlier, you mentioned various modes of the agent, such as planning mode, inquiry mode, debugging mode, etc. What do these modes mean in practical application? Is it just about setting different prompts for the agent, or is there more complex logic behind them?

Jason Ginsberg: Many times, it is indeed just a matter of modifying system-level prompts. However, in some cases, we also need to make corresponding adjustments to the user interface. For example, the planning mode now also includes an interactive questioning feature that actively interrupts user operations during execution to seek feedback. Users can sometimes set parameters themselves, such as adjusting the frequency of agent interruptions. As for inquiry mode, it does not just rely on specific system prompts but also restricts the agent from calling certain file editing-related tools to ensure the stability and reliability of the functionality.
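The distinction Ginsberg draws, prompt changes for some modes versus hard tool restrictions for others, could be modeled like this. The mode names mirror the interview; the data structures and tool names are illustrative assumptions, not Cursor's actual configuration format.

```python
# Hypothetical sketch: a "mode" pairs a system prompt with a tool allowlist.
# Tool names and structures are invented for illustration.

from dataclasses import dataclass, field

ALL_TOOLS = {"read_file", "edit_file", "run_command", "ask_user"}

@dataclass
class Mode:
    name: str
    system_prompt: str
    blocked_tools: set = field(default_factory=set)

    def allowed_tools(self):
        return ALL_TOOLS - self.blocked_tools

# Inquiry mode: a prompt tweak plus a hard restriction on editing tools,
# so reliability does not depend on the prompt alone.
inquiry = Mode(
    name="inquiry",
    system_prompt="Answer questions about the codebase; do not modify files.",
    blocked_tools={"edit_file", "run_command"},
)

print(sorted(inquiry.allowed_tools()))
# → ['ask_user', 'read_file']
```

Blocking the tool at the framework level guarantees the behavior even if the model ignores its prompt, which is the "stability and reliability" point made above.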

Harrison Chase: Returning to the previous topic, regarding the different ways people use Cursor, do you think there is a so-called “best way” to use coding agents or Cursor in the future?

Jason Ginsberg: I don’t think there is a “best way.” The specific usage method largely depends on the individual engineer’s work habits and the specific tasks they are handling. Currently, there are both asynchronous uses of agents and modes where developers stay deeply involved in real time, such as adjusting code visually or performing visual editing operations as they program. However, I often see so-called “agent usage tips” on Twitter, and I am somewhat skeptical of them. Many people claim, “This is the best way to use agents,” but in my opinion, these tips are often fabricated.

Internally, our team does not use long, complex prompts or adopt multi-stage planning strategies. Most of the time, we iterate quickly. If the results of the agent’s operation are not satisfactory, we simply terminate the process and restart the agent. Typically, this method is the most efficient.

Is Natural Conversation the Ultimate Interaction Mode for Cursor?

Harrison Chase: If you were to predict the situation a year from now, how do you think developers will use Cursor across IDEs, terminals, and other forms?

Jason Ginsberg: Of course, I would have a certain subjective bias. But I believe terminal tools will not become the users’ first choice. I think what truly drives industry development is the increasing trust users have in agents. They prefer to wait until the agent has completed all tasks before reviewing the final modifications and deciding whether to adopt them, and they are also willing to let the agent run longer to achieve smarter processing.

The importance of IDEs lies in the fact that they are tools tailored for the entire software development cycle. From project planning to running code modifications, reviewing code content, clearly comparing code differences, submitting code merge requests, and previewing effects in the browser, all these steps can be seamlessly integrated into the modular functionality of IDEs. This is something that can easily be overlooked, as these IDE features have been refined over decades of development.

I believe a clear trend in the current industry is that product-level design is becoming increasingly important. Now, the most frequently used features by Cursor users, such as planning mode, actually require support from visual editors. Users need to be able to add comments in the editor and interact in real-time. Once detached from visual interactive elements like buttons, pop-ups, and menus, the difficulty of user interaction with tools increases significantly.

However, I believe that not all operations in the future need to be confined to the IDE on a laptop. This mode will not be completely replaced; the specific usage scenarios will flexibly change based on actual needs, and the applicable scenarios will become broader. Users will be able to use tools like Cursor in more contexts.

Harrison Chase: There will be more scenarios where tools like Cursor can be used. You must have a corresponding website, right? Can users interact directly on the web? Is that the idea?

Jason Ginsberg: Yes, we do have a website. The reason for this is that users can access it anytime and anywhere through devices like smartphones. I believe that in the near future, users will be able to wear AirPods, activate voice mode, and communicate in real-time with the agent, brainstorming ideas and allowing the agent to continuously optimize solutions. When users arrive at the office and open their laptops, they will already have a pile of code modification records or demo videos waiting for review, at which point they will only need to confirm or reject them. If some details need fine-tuning, they can download the project locally for modifications.

Harrison Chase: I think Cursor’s real advantage lies in the comprehensive design and user experience system built around agent interaction. You previously worked at Notion, and I remember that even before the rise of generative AI, Notion’s design and user experience were already widely recognized. Of course, they have also successfully transformed in the era of generative AI. From a company with an excellent design foundation before the generative AI boom to one now focused on agent-related work, how do you think the emergence of agents has changed product design and user experience? Are the current work modes similar to those before?

Jason Ginsberg: Overall, I believe that most of our product design is not AI-exclusive. The interactive components and user experience patterns available for products are limited, and applications on the market are fundamentally built on some traditional models, such as inboxes, dashboards, and chat interfaces, which are all mature designs. Therefore, our core work is more about reasonably combining these existing design patterns and presenting them appropriately in the product. This is in line with Notion’s product philosophy and is also a core characteristic of Cursor and integrated development environments (IDEs): a high degree of modularity.

As a user, you will find that everyone’s IDE interface layout can vary significantly. You can customize the panel layout, dragging and dropping any component to any position, creating a completely different interface from your colleague sitting next to you. I believe this modular design is crucial for product adaptability because, as I mentioned earlier, the capabilities of agents are evolving rapidly, and user needs and expectations change almost every few weeks. When we launched Cursor 2.0 a few months ago, we did not completely overhaul the original product; we simply rearranged the various functional modules into a sidebar inbox-style management layout while optimizing the information density of the chat interface.

Harrison Chase: It sounds like many components share underlying logic. Have any new components emerged? Or have the priorities of certain components changed? After all, these components were initially designed for “human-software interaction” and “human collaboration through software,” and now with the introduction of agents, has anything changed?

Jason Ginsberg: I believe the underlying design logic and core elements have not changed; the key change is who is leading the interface interaction. Within this core framework, countless interaction forms can evolve. For example, a year ago, when people used agents, they were eager to watch every step of the operation, closely monitoring everything. But now, the operational steps of agents have become incredibly complex, and users simply cannot keep up. Therefore, we need to optimize how information is presented: how to group operational steps? How to distill key information?

Once users trust the agent’s operations enough, we need to focus on the actual content of file modifications and provide more detailed annotations for these modifications. Of course, we can further enhance the flexibility of interactions, such as allowing conversations not to be limited to a single agent but to engage with multiple agents simultaneously. This requires a more intelligent backend interaction logic to support it, where the system must recognize which sub-agent the user is conversing with and coordinate these agents to complete the corresponding modifications. In the future, this level of interaction abstraction will continue to rise.
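The routing problem mentioned above, recognizing which sub-agent a user message is addressed to, can be illustrated with a toy dispatcher. A real system would use a model for this; the keyword scoring below is purely an assumption for illustration.

```python
# Toy sketch of routing a user message to one of several running sub-agents,
# as described above. Real systems would classify with a model; this scores
# overlap between the message and each agent's task description.

def route_message(message, agents):
    # agents: dict mapping agent name -> current task description
    scores = {
        name: sum(word in message.lower() for word in task.lower().split())
        for name, task in agents.items()
    }
    return max(scores, key=scores.get)

agents = {
    "frontend-agent": "fix menu positioning in the settings panel",
    "backend-agent": "migrate the auth database schema",
}
print(route_message("the menu still opens in the wrong place", agents))
# → frontend-agent
```

However it is implemented, the dispatcher is what lets a single conversation fan out to multiple coordinated agents, the rising "level of interaction abstraction" Ginsberg describes.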

Harrison Chase: What do you think is the highest level of interaction abstraction that can be achieved? I know predicting the future is difficult, but I would still like to hear your thoughts.

Jason Ginsberg: I think in the future, various operational options we currently see, such as selecting models, choosing functional modes, and selecting operating environments, will gradually disappear. The final interaction mode will become as natural as conversing with a real person. However, this does not mean that anyone can write code casually; at that stage, this tool will still serve professional engineers. Because you still need to have a grasp of industry-specific terminology and understand what you want to modify. Product people need to clarify their desired workflows and functional requirements; infrastructure people need to have a solid understanding of the codebase and know what architecture and system design are most suitable for the project they are developing.

I also want to emphasize that as the level of abstraction increases, we will not discard existing functionalities. Users can still dive deep into the details and adjust parameters at any time. The default interaction mode of the product will just continue to optimize and upgrade.

Inside Cursor: Less Code Review, More Frequent Feedback

Harrison Chase: You previously mentioned the role of humans in the agent workflow, such as reviewing code differences and conducting code reviews. How do you think AI will change the code review process?

Jason Ginsberg: First of all, in terms of our product team’s workflow, the proportion of manual reviews has significantly decreased. We have a tool called BugBot that automatically detects code issues and autonomously completes fixes, continuously iterating and optimizing within the continuous integration (CI) process. This tool performs exceptionally well and has given us more confidence in the quality of AI-reviewed code.

Secondly, there is semantic grouping of information. When users review code differences, they can clearly see what modifications the agent has made. We can even display the agent’s original instructions, and ideally, the agent could annotate each modification with explanations of why it was made when handling large code merge requests. While this may not be a revolutionary change, it can significantly optimize the code review process.

Harrison Chase: Out of curiosity, I want to ask, do Cursor engineers write code using Cursor and have BugBot review the code? Do they still need to communicate and collaborate with other engineers?

Jason Ginsberg: Haha, that’s an interesting question. If you join Cursor as an engineer, you will immediately notice that everyone is deeply using our own product. I remember during my first week, I modified a shortcut setting. That shortcut was Alt+Shift+Command+J, which is quite obscure, and I thought no one would notice it. However, less than half a minute after I made the change, three colleagues messaged me on Slack, saying, “The shortcut you changed has disrupted my workflow! What happened?” Almost any product change receives immediate and strong feedback from colleagues. I think this is a good thing; everyone is rapidly advancing product iterations through this high-frequency feedback and communication.

Harrison Chase: From an organizational management perspective, have you taken any measures to encourage or guide this high-frequency feedback collaboration model? After all, a large volume of feedback can sometimes be overwhelming.

Jason Ginsberg: Before I founded my own company, engineers communicated via email, but it was not used much. People even said, “Email is only for receiving spam and shopping notifications; don’t use it to send lengthy work content.” In this field, there is no need to rely on such an inefficient channel. Everyone on our team is fully engaged in their work; this is a highly competitive space, everyone is passionate about product development, and people naturally collaborate through instant communication tools.

Additionally, when planning product features, I follow a core principle: What features can I develop to make my daily work easier? Specifically, I think about “What can help me work more efficiently tomorrow without dealing with annoying errors and issues?” This principle guides most of our work. After all, once such features are developed, we can immediately benefit from them, like fixing an annoying bug so that we won’t be troubled by it again at work.

The Core Features Driven by Employees’ Needs

Harrison Chase: How much of your product roadmap is driven by the need to “make work easier for ourselves”? How much comes from external user needs? Has this proportion changed as the company has grown?

Jason Ginsberg: This proportion has indeed changed as the company has scaled. We now also set monthly product roadmaps and goals, but to be honest, many of our core features have come from bottom-up innovation. For example, the agent feature of Cursor is probably the core feature that comes to mind when people think of Cursor. This feature was developed by one of our team members, and initially, no one believed in the idea, but he quickly created a prototype. After everyone tried it, they were amazed, saying, “Wow, this thing really works!”

The debugging mode I mentioned earlier is similar. During the Thanksgiving holiday, I was bored and developed this feature that I needed, and now it is about to be launched. The initial intention behind developing these features was to address internal needs. We assess whether a feature is ready for release based on its internal usage rate and recognition.

Harrison Chase: Your product iteration speed is astonishing. How do you maintain such an efficient development rhythm?

Jason Ginsberg: To be honest, our workflow is very streamlined, without too many cumbersome systems. While there are a few meeting rooms in the company and one or two product managers, we rarely advance work through writing documents or holding alignment meetings. Most discussions and decisions are made at the code level. The core reason this is possible is our extremely high talent requirements. Earlier this year, the company had only about 20 people. The reason for the slow growth in team size is that our hiring standards are almost harsh. We repeatedly evaluate: this person is excellent, but can they become one of the top people in the team?

Because everyone in the team is outstanding, we can confidently assign tasks to anyone. Team members are highly proactive, from proposing ideas and designing user experiences to responding to user support requests on Twitter, communicating requirements with enterprise clients, and ultimately implementing features. Therefore, our ability to maintain this speed ultimately comes down to the people.

Harrison Chase: How do you plan your product roadmap? You mentioned a monthly planning cycle; is this the standard planning duration now? Is there any longer-term planning? Additionally, the pace of technological iteration in the industry is incredibly fast. How do you balance “keeping up with existing technology trends” and “achieving technological breakthroughs”? Do you actively anticipate technological trends and lay out future directions in advance?

Jason Ginsberg: We do invest a lot of energy in thinking about the future, such as anticipating potential technological breakthroughs in the next three months and proactively betting on related directions. The monthly roadmap we set is more focused on core product features, addressing actual user needs and those features that can optimize daily usage experiences. Major projects that require two months to reconstruct underlying logic will be included in longer-term planning.

Moreover, our adaptability is quite strong. Sometimes we receive early access to test versions of new models, and after trying them out, if we find they perform exceptionally well in certain areas, team members often voluntarily work overtime on weekends to complete related feature development before the new model is officially released. Many important features can actually be built in just a few days.

Harrison Chase: Speaking of models, you released your self-developed Composer model. What was the intention behind developing this model? How is user adoption currently? Has this model changed how people use Cursor?

Jason Ginsberg: We found that the coding scenarios in which engineers use our product require a model specifically tailored to support them. The Composer model is designed for these scenarios, with a clear focus on speed, quality, and intelligent logic, making it particularly suitable for “human-machine real-time collaboration” scenarios. I frequently use it in my front-end development because I need to make frequent subtle interaction design decisions, which requires the agent to provide feedback within seconds. Composer acts like an efficient collaborative partner, quickly responding to needs and brainstorming ideas, complementing models suitable for long-term asynchronous tasks very well.

Harrison Chase: Is the research and development of Cursor’s agent-related work a team effort, or is there a dedicated team responsible for it?

Jason Ginsberg: We do have a dedicated team responsible for optimizing the performance of agents, focusing mainly on building toolchains, scheduling frameworks, and effect evaluations. However, as I mentioned earlier, our team structure is not rigid, and there are no strict limitations on everyone’s work scope. For instance, if engineers from the core product team need to make adjustments to the agent while developing the planning mode, they will closely collaborate with the agent team. Moreover, during the development process, we still deeply use our own products for testing, and team members share their experiences to evaluate the actual effectiveness of features.

Harrison Chase: Do members of the agent team or other engineers skilled in agent development share any common traits? Are there any particular aspects of their professional background or personal abilities?

Jason Ginsberg: I think most of them are more product-oriented talents rather than traditional machine learning or algorithm research experts. These individuals often rotate between different teams because developing agents requires a strong intuition for the final user experience and the ability to accurately interpret team feedback.

Harrison Chase: Last week, you collaborated with OpenAI to publish a blog about optimizing Cursor’s agent scheduling framework based on OpenAI’s new model. I often see discussions about the concept of “agent scheduling framework” on Twitter. How do you view the underlying support architecture for models? Does this architecture need to be deeply bound to specific models? For example, would the architecture for the Composer model differ significantly from that for the CodeLlama model?

Jason Ginsberg: I haven’t been deeply involved in this area of work, but to my knowledge, our core goal is to create a highly flexible architecture. After all, we need to continuously experiment with new technologies and functional modes, so the architecture must quickly adapt as model capabilities upgrade.

Harrison Chase: That makes sense. The entire industry is indeed changing rapidly.

Open Q&A

Questioner 1: Earlier, you mentioned the new visualization browser feature. I noticed that some tools like Lovable also have similar features. Is this feature developing towards “immersive visual coding”?

Jason Ginsberg: I don’t think it is designed for immersive visual coding. As I mentioned earlier, this feature was initially developed for myself, as I am a product engineer, and its core user group is actually professional engineers and designers. When developing applications, everyone has encountered situations where a carefully designed interface ends up becoming the same old purple-yellow gradient that everyone is tired of. This feature is intended to allow users to have precise control over details, such as adjusting padding to exact pixel values. It provides users with a more intuitive “visual operation language,” which is more precise than pure text commands.

Moreover, even without using the sidebar, you can directly click on page elements and input prompts to issue commands at any time. With this feature, you can start six agents simultaneously in just a few seconds. If you enable hot reloading, your website will present modification effects in real-time, which is quite interesting to use.

Questioner 2: I particularly love your browser agent and have been using it. However, I noticed a small flaw: I want to continuously iterate and optimize design solutions, but the agent always interrupts my work by directly submitting code merge requests. Is there a possibility of achieving uninterrupted continuous iteration in the future?

Jason Ginsberg: Absolutely. The future direction is to enable the agent to have autonomous evaluation capabilities, allowing it to run continuously for extended periods and iterate based on needs. The current debugging mode still requires manual clicks to confirm log information, but this is just a transitional solution. The ideal state is for the agent to autonomously complete evaluations and iterations until the issue is fully resolved.

Questioner 3: I don’t know if you are deeply involved in the development of agent-related work, but I noticed that Cursor’s memory management feature is quite good. It can autonomously manage relevant information based on individual engineers, departments, and even the entire company’s preferences, rules, and processes. We all know that information and context are crucial for agents. Do you have plans to further expand and upgrade this feature? Especially regarding long-context processing, what ideas do you have?

Jason Ginsberg: We are conducting a lot of experiments and explorations. We have already implemented several functional modules such as rule management, memory recall, and skill libraries. Currently, we are primarily researching efficient information summarization techniques. Additionally, with our self-developed model, we are exploring ways to enable the model to autonomously identify key information that repeatedly appears in conversations or code. Of course, cross-organizational information sharing is also worth exploring. However, there is a point to note: relevant rules and information may become outdated with model iterations. Therefore, we must ensure that users can easily update this content to avoid being constrained by outdated rules.
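One piece of the memory work described above, having the model identify key information that repeatedly appears in conversations, can be caricatured with a frequency filter. This is a deliberately naive sketch: a real system would summarize semantically with a model rather than count normalized strings.

```python
# Naive sketch of surfacing repeated statements as memory candidates,
# as described above. Real systems would dedupe semantically with a model.

from collections import Counter

def memory_candidates(statements, min_occurrences=2):
    counts = Counter(s.strip().lower() for s in statements)
    return [s for s, n in counts.items() if n >= min_occurrences]

history = [
    "Use pnpm, not npm",
    "use pnpm, not npm",
    "Prefer tabs in this repo",
    "Use pnpm, not npm ",
]
print(memory_candidates(history))
# → ['use pnpm, not npm']
```

The staleness concern raised above applies directly: any candidate promoted to a rule needs an easy path for users to revise or delete it later.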

Questioner 4: Regarding the Composer model you released, I know some developers who fine-tuned a specialized model for the medical field based on the Gemini model. However, they found that the fine-tuned model performed worse than directly using the native Gemini model for single prompt calls. They analyzed that the reason is that fine-tuned models require continuous maintenance to keep up with updates to foundational models like Gemini. How do you formulate strategies to ensure that the Composer model does not become outdated?

Jason Ginsberg: You are referring to the Composer model, right? We will continuously iterate and optimize it; it is not a static model. Our core focus is to find the best balance between speed and intelligence to meet Cursor users’ needs in most scenarios. However, we do have room for improvement in specific areas like long-context processing.

Questioner 5: I am a product manager and have been using Cursor for prototype development, even playing the role of a designer in my team, using it to replace Figma. I am curious if there are users who, before using Cursor, had never installed any integrated development environment (IDE)? Will this group of users become a key focus for you in the future? After all, the current coding agents are already powerful enough to accomplish many tasks.

Jason Ginsberg: To be honest, we are not currently focusing on this group of users as a core target. Of course, we recognize that the usability of tools needs to be continuously improved, and the ease of use of Cursor is also steadily increasing, such as the new browser tool being friendly for designers. However, our core goal is actually to empower top engineers. We have been thinking about how to make the best engineers in the world even stronger. In this process, the tools we develop will naturally benefit a broader audience. However, we still have a lot of work to do in product optimization, such as improving onboarding and environment configuration processes. After all, designers and product managers often encounter difficulties when configuring tools like GitHub. We hope to attract more users to try Cursor by optimizing these aspects.

Questioner 6: I have been trying to use Cursor to build a verification matrix for smart contracts and test run logic. Do you have any lesser-known practical workflows to recommend for deep quality testing and security reinforcement? Or can the debugging tools mentioned earlier come in handy? I am particularly interested in the quality testing of smart contracts.

Jason Ginsberg: To be honest, we are trying to enable the agent to autonomously complete testing tasks, but this capability has not been fully released yet. For those involved in quality testing, I strongly recommend trying out our newly released debugging mode. Its logic for identifying issues is very clear, almost deterministic, which will be very helpful.

Questioner 7: What do you think is the biggest opportunity for Cursor in the next two to four months? Will it be the voice agent?

Jason Ginsberg: I think the opportunity does not lie in the voice agent. The core need of users at this stage is actually to make agents smarter, run longer, and handle more tasks. Many current agents essentially only “read code” and cannot genuinely determine whether the modified code is effective. There is a vast space for future development; we can invest more computational power to allow agents to take on more of the verification work currently handled by humans. I believe that in the next three to six months, the entire industry will undergo significant changes, which is very exciting.
