Wed, Sep 10, 2025 • Featured

Introducing Agent 3: Our Most Autonomous Agent Yet

We're excited to introduce Agent 3, our most advanced and autonomous Agent yet. Compared to Agent V2, it is a major leap forward: 10x more autonomous, with the ability to periodically test your app in the browser and automatically fix issues using our proprietary testing system, which is 3x faster and 10x more cost-effective than Computer Use models. Even better, Agent 3 can now generate other agents and automations to streamline your workflows.

What's New

1. App Testing: Agent tests the apps it builds (using an actual browser)

Agent 3 now tests and fixes the app it is building, constantly improving your app behind the scenes. We are launching two different options here, depending on your needs:
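The browser-based testing described above can be pictured, very roughly, as a loop that loads the app in a real browser, checks a few expectations, and reports failures back to the agent. The sketch below uses Playwright to illustrate that general pattern only; it is not Agent 3's proprietary testing system, and the URL, selector, and feedback step are placeholder assumptions.

```python
# Illustrative only: a generic browser smoke-test loop, not Agent 3's testing system.
from playwright.sync_api import sync_playwright

def smoke_test(app_url: str) -> list[str]:
    """Load the app, run a basic check, and collect console errors."""
    issues: list[str] = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # Capture JavaScript console errors emitted while the page runs.
        page.on("console", lambda msg: issues.append(msg.text) if msg.type == "error" else None)
        page.goto(app_url)
        # Placeholder assertion: the agent would generate checks specific to the app.
        if page.locator("text=Sign up").count() == 0:
            issues.append("expected 'Sign up' button not found")
        browser.close()
    return issues

issues = smoke_test("http://localhost:3000")  # placeholder URL
if issues:
    # In an agent loop, these findings would be fed back for an automatic fix attempt.
    print("issues to fix:", issues)
```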

  • Tue, Apr 18, 2023

    How to train your own Large Language Models

    Learn how Replit trains Large Language Models (LLMs) using Databricks, Hugging Face, and MosaicML.

    Introduction

    Large Language Models, like OpenAI's GPT-4 or Google's PaLM, have taken the world of artificial intelligence by storm. Yet most companies don't currently have the ability to train these models, and are completely reliant on only a handful of large tech firms as providers of the technology. At Replit, we've invested heavily in the infrastructure required to train our own Large Language Models from scratch. In this blog post, we'll provide an overview of how we train LLMs, from raw data to deployment in a user-facing production environment. We'll discuss the engineering challenges we face along the way, and how we leverage the vendors that we believe make up the modern LLM stack: Databricks, Hugging Face, and MosaicML. While our models are primarily intended for the use case of code generation, the techniques and lessons discussed are applicable to all types of LLMs, including general language models. We plan to dive deeper into the gritty details of our process in a series of blog posts over the coming weeks and months.

    Why train your own LLMs?
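As a hedged sketch of one step in that raw-data-to-deployment path, the snippet below shows the standard causal-LM data preparation with Hugging Face datasets and transformers: tokenize a code corpus and pack it into fixed-length training blocks. The file paths, tokenizer, and block size are placeholder assumptions, not Replit's actual pipeline.

```python
# Sketch of generic causal-LM data prep; paths and tokenizer are stand-ins.
from datasets import load_dataset
from transformers import AutoTokenizer

BLOCK_SIZE = 2048  # assumed context length

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in tokenizer
raw = load_dataset("text", data_files={"train": "code_corpus/*.txt"}, split="train")

def tokenize(batch):
    # Convert raw source files into token ids.
    return tokenizer(batch["text"])

def group_into_blocks(batch):
    # Concatenate all token ids, then split into equal-sized training blocks.
    ids = sum(batch["input_ids"], [])
    total = (len(ids) // BLOCK_SIZE) * BLOCK_SIZE
    blocks = [ids[i : i + BLOCK_SIZE] for i in range(0, total, BLOCK_SIZE)]
    return {"input_ids": blocks, "labels": [b[:] for b in blocks]}

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
lm_dataset = tokenized.map(
    group_into_blocks, batched=True, remove_columns=tokenized.column_names
)
print(lm_dataset)
```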

  • Tue, Apr 11, 2023

    Replit Deployments - the fastest way from idea → production

    After a 5-year Hosting beta, we're ready to Deploy.

    Introducing Replit Deployments

    Today we're releasing Replit Deployments, the fastest way to go from idea to production in any language. It's a ground-up rebuild of our application hosting infrastructure. Here's a list of features we're releasing today:

      • Your hosted VM will rarely restart, keeping your app running and stable.
      • You're Always On by default. No need to run pingers or pay extra.

  • Mon, Apr 10, 2023

    Building the #1 LLM Comparison Tool with Bounties - nat.dev

    "Replit bounties are a great way to find talented developers for adventures!" Nat Friedman / former CEO of GitHub About the Bounty Posters Nat Friedman is the former CEO of GitHub. He previously co-founded Xamarin, which was acquired by Microsoft in 2016. Before GitHub, Nat held several leadership positions at Microsoft, including Corporate Vice President of Developer Services. Nat is most known for his contributions to open-source projects, most recently nat.dev and the Vesuvius Challenge. Matt Huang and Alexandr Wang also contributed to this Bounty reward.

  • Wed, Apr 5, 2023

    Hackers, Pros, and Teams users can now code for hours without restarts

    Stay Connected

    Starting today, all users on Hacker, Pro, or Teams plans will see a 10x reduction in container restarts while coding in the Workspace. Previously, you would experience a restart at least once an hour. Now you can code for multiple hours straight without restarts. Deep work can stay uninterrupted and you can keep programs running longer while you build.

    Repls are computers that live in the cloud. One of the most painful experiences with a cloud computer is losing your network link. Sometimes your network flakes out and things need to reconnect. But the worst version is when your Repl restarts. There are lots of reasons why this can happen. In the background, your container has stopped or died, and our infrastructure quickly starts up a new one to put you in. You can simulate this by typing kill 1 in the Shell.

  • Wed, Apr 5, 2023

    Replit + Chroma: AI for the next billion software creators

    Guest post by Chroma

    Today we're announcing the Chroma template for Replit, the next step towards bringing the power of AI application development to the next billion software creators. With the Chroma template, developers can easily create AI applications with state and memory. Want to make ChatGPT for your email? Or chat with your textbooks while you study? Want LLMs to know about the latest news stories? Together with Replit, Chroma makes all that and more easy.
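For a sense of the state-and-memory pattern the Chroma template enables, here is a minimal sketch using the chromadb Python client: store a couple of documents, then retrieve the most relevant one for a question. The collection name and documents are illustrative, and wiring the results into an LLM is left out.

```python
# Minimal retrieval sketch with the chromadb client; names and data are made up.
import chromadb

client = chromadb.Client()
notes = client.create_collection(name="study_notes")

# Store a few documents; Chroma embeds them for similarity search.
notes.add(
    ids=["1", "2"],
    documents=[
        "Photosynthesis converts light energy into chemical energy in plants.",
        "Cellular respiration releases energy stored in glucose.",
    ],
)

# Retrieve the document most relevant to a question.
results = notes.query(query_texts=["How do plants store energy?"], n_results=1)
print(results["documents"][0])
```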

  • Tue, Apr 4, 2023

    Replit Bounties: The Best Place to Build and Launch MVPs

    "If I had turnkey access to this liquid, global talent pool in previous product roles, it would have been amazing for developing prototypes and taking ideas from 0 to 1." Christian Ulstrup / founder and CEO of GSD @ Work About the Bounty Poster Christian Ulstrup is the founder and CEO of GSD @ Work, a strategic consulting firm that advises pre-Series C startups and business leaders on strategy, product development, and GTM motion. Christian’s prior experience consists of all things product, from being a startup founder to leading cross-functional product teams at venture-backed medical device tech startups and larger companies like Red Bull.

  • Mon, Apr 3, 2023

    April 2 Potential GitHub Credentials Exposure

    Yesterday, on April 2, 2023, Replit discovered a site vulnerability that may have exposed GitHub auth tokens for <0.01% of Replit users, stemming from use of the GitHub import feature. This could have permitted unauthorized read/write access to all the repositories of those users by default (users can choose to authorize just a subset of repositories). We have no indication that those exposed tokens were misused or used to exploit GitHub repositories. The vulnerability has been fixed, all existing GitHub auth tokens associated with the Replit app have been revoked, and access to the GitHub import feature has been restored. The number of exposed users was limited to <0.01% because there are two preconditions that needed to be met for a user's Repl to be vulnerable:

      • The Repl was created using the GitHub import feature; and
      • One of:

  • Thu, Mar 30, 2023

    Applications of Generative AI Webinar

    In case you missed it, last week we hosted legendary AI Researcher Jim Fan from NVIDIA AI for an incredible discussion on all things Generative AI. During a one-hour conversation with Amjad Masad, CEO and co-founder of Replit, and Michele Catasta, Replit's ML Advisor, the group discussed the recent advancements in AI and the potential impact of multi-modality on the field.

    Event Recap

    Jim Fan has worked in AI for a decade and has collaborated with several prominent AI researchers. He highlights the growth of AI from image recognition to large language models like GPT-4. Amjad shares his background in developer tools and his excitement about applying machine learning and statistical approaches to code.

    2:50 - The discussion starts with the recent NVIDIA GTC event, with Jim describing NVIDIA's transition from a hardware provider to an enterprise-focused AI provider. He is mainly excited about NVIDIA AI Foundations, which offer customization services that allow enterprises to create unique use cases with multimodal language models. These models will also help incorporate images, videos, and 3D data into AI systems.

    5:45 - Michele highlights the importance of multi-modality and how it grants superpowers in communicating with computers. Jim envisions a range of possibilities with multi-modal language models, including being able to interact with more natural human input, enhancing note-taking, and automating home decoration plans.

  • Mon, Mar 27, 2023

    Replit and Google Cloud Partner to Advance Generative AI for Software Development

    Note: Since this announcement, Replit AI has launched a series of new features and updates. For the most current information, check out our Replit AI page. Original press release here.

    Under the new partnership, Replit developers will get access to Google Cloud infrastructure, services, and foundation models via Ghostwriter, Replit's software development AI, while Google Cloud and Workspace developers will get access to Replit's collaborative code editing platform. The collaboration will accelerate the creation of generative AI applications and underscores Google Cloud's commitment to nurturing the most open ecosystem for generative AI. For Replit, already 20 million developers strong, this partnership with Google Cloud is its next move in realizing its mission to empower the next 1 billion software creators.

  • Mon, Mar 27, 2023

    Building Ghostwriter Chat

    At Replit, we strive to give our users the most powerful programming environment, and what better way is there than giving them an AI pair programmer directly in their workspace? Enter Ghostwriter Chat.

    Why we built Ghostwriter Chat

    Many IDEs today are not truly integrated; they lack the tools a developer interacts with throughout the course of their work. With Ghostwriter Chat, our goal is to give developers all the power they need without them ever having to leave the IDE. Gone are the days when you had to search Stack Overflow for an obscure error message, or visit the docs of your favorite package for the millionth time because you forgot what that one argument was called. And since Ghostwriter is built right into your repl, it can use things like file context, chat history, and program output to help you write code, answer questions, or even debug an error. No copying and pasting is required. We started working on Ghostwriter Chat during our Hackweek in January. The project won first place, and we quickly found that we couldn't live without it. We wanted to be the first to market with an LLM-powered chat application that is native to your editor. We decided to ship it, and did so in a month!

  • Thu, Mar 23, 2023

    Building Ghostwriter Chat

    Why did we build it? At Replit, we want to give people the most powerful programming environment, and what better way is there than giving people access to a pair programmer directly in their IDE? Enter Ghostwriter Chat. Gone are the days when you had to search Stack Overflow for an obscure error message or visit the docs of your favorite package for the millionth time because you forgot what that one argument was called. Interacting with Ghostwriter should be as easy as interacting with a team member, and it is. Since Ghostwriter Chat has access to your repl's files and context, Ghostwriter can help answer questions about your program without copying and pasting entire code blocks. We started working on Ghostwriter Chat during our Hackweek in January, and we wanted to be the first to market with an LLM chat application native to your editor. From start to finish, we shipped the product in ~6-8 weeks.

  • Wed, Mar 22, 2023

    Announcing Outbound Data Transfer Limits

    Beginning April 7th, Replit will enforce limits on the amount of outbound data that developers can transmit from their Repls to users and external services. Inbound data transfer is free. You can see how much outbound data transfer you've used on your account page. The meter resets at UTC midnight at the start of every month, and the base limit depends on your plan: Free tier developers will receive 10 GiB, Hacker developers 50 GiB, and Pro developers 100 GiB. These plan limits are captured on our pricing page. You may purchase additional outbound data transfer using Cycles or another payment method at $0.10/GiB. You will receive an email and on-platform notification when you are approaching and when you have reached your limit so you can take action and keep your Repls running.
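As a quick, unofficial illustration of how these numbers combine, here is a hypothetical overage estimator using the plan limits and the $0.10/GiB rate quoted above; the function name and plan keys are made up for the example, not an official Replit calculator.

```python
# Hypothetical helper: estimate monthly outbound-transfer overage cost in dollars.
PLAN_LIMITS_GIB = {"free": 10, "hacker": 50, "pro": 100}  # base limits quoted above
OVERAGE_RATE_PER_GIB = 0.10  # $/GiB for additional transfer

def estimate_overage(plan: str, used_gib: float) -> float:
    """Return the overage cost for the given plan and outbound usage."""
    included = PLAN_LIMITS_GIB[plan.lower()]
    over = max(0.0, used_gib - included)
    return round(over * OVERAGE_RATE_PER_GIB, 2)

# 72.5 GiB used on Hacker: 22.5 GiB over the 50 GiB limit -> $2.25
print(estimate_overage("hacker", 72.5))
```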

  • Sun, Mar 19, 2023

    An update to cover pages

    We're excited to announce that Replit rolled out a big visual update to cover pages last week. With this update, Repl content comes first more than ever before. Everything else is now off to the side, allowing for a cleaner and more focused app experience. Our goal with this update is to improve the user experience for both creators and visitors of Replit projects. Two key improvements come with this update:

  • Thu, Mar 16, 2023

    Worldwide Repls, part 3: Firing Up The Engines

    At Replit, we operate a cloud-based infrastructure that allows developers to collaborate and create within an all-in-one, integrated development environment. One of the most significant parts of this experience is the latency perceived by the developer when interacting with the workspace. While we can always add resources such as CPU, RAM, and storage on demand, when tackling latency we have to deal with fundamental physical limits such as the speed of light. This means that you can only do so much for latency by throwing resources at it; at some point, you just need to bring the server closer to the end user. Given that we want to provide a platform for the next billion software creators, we need infrastructure distributed around the world.

    While we run many of the workspace interactions, such as file editing and browsing, locally in the user's browser, many tasks still depend on communicating with your Repl running on our servers. Examples of such interactions are typing into the shell and getting Language Server Protocol results like catching errors and finding symbol definitions. Because these interactions require communication between the user's browser and the Repl, the only way to reduce latency is to bring the two closer together. In doing that, we also ensure that each part of the development feedback loop remains quick and efficient, improving the experience for the user.

    Replit's platform team works hard on improving the infrastructure at the core of Replit. This includes running containers, providing hosting, managing storage, and networking. We recently made some substantial improvements to the infrastructure: dividing it into multiple failure domains with clusters, moving our global state management from Redis to an SQL-backed Control Plane, and creating our own load balancer for assigning Repls to machines. These improvements not only boost performance and reduce the opportunities for inconsistent state to appear; at the end of the day, they provide an all-around better experience for the developers on our platform.

    In this post, we'll give an overview of how Replit's infrastructure is organized, and then dive into detail about how we tackled the next step in improving the experience for a large number of developers: geographic distribution.

    Clusters: Isolation and Ease of Management
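To make the "bring the server closer to the user" idea concrete, here is a toy assignment routine, not Replit's actual load balancer, that picks the lowest-latency cluster and then the least-loaded machine inside it; the cluster names, RTTs, and load counts are invented for the example.

```python
# Toy sketch of latency-first, load-second Repl placement; data is made up.
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    active_repls: int  # rough proxy for load

@dataclass
class Cluster:
    region: str
    rtt_ms: float  # measured round-trip time from the user to this cluster
    machines: list

def assign_repl(clusters: list) -> tuple:
    """Return (cluster, machine): nearest cluster, then its least-loaded machine."""
    nearest = min(clusters, key=lambda c: c.rtt_ms)
    machine = min(nearest.machines, key=lambda m: m.active_repls)
    return nearest, machine

clusters = [
    Cluster("us-central", 24.0, [Machine("uc-1", 310), Machine("uc-2", 280)]),
    Cluster("asia-southeast", 12.0, [Machine("as-1", 150), Machine("as-2", 190)]),
]
cluster, machine = assign_repl(clusters)
print(cluster.region, machine.name)  # asia-southeast as-1
```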

  • Thu, Mar 9, 2023

    Get Started with LLMs: AI Camp x Replit Course Now Available

    Replit and AI Camp are launching a brand new, 4-hour course, right here on Replit! Unlock the Power of LLMs like GPT with Python is a four-lesson course that'll teach you how to:

      • Access AI APIs
      • Implement GPT-2
      • Trade up to Gradio, Flan-T5, and GPT-3
      • Build your own auto-summarizer using GPT-3

    All from within a Repl: gitless and instant, from start to a running LLM app in the first 15 minutes.
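As a taste of the course's second lesson, implementing GPT-2, here is a minimal text-generation snippet using the Hugging Face transformers pipeline; the prompt and generation settings are illustrative and not taken from the course materials.

```python
# Minimal GPT-2 generation example with the transformers pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Replit is a", max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```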