Wed, Sep 10, 2025 • Featured

Introducing Agent 3: Our Most Autonomous Agent Yet

We’re excited to introduce Agent 3, our most advanced and autonomous Agent yet. Compared to Agent V2, it is a major leap forward: it is 10x more autonomous, with the ability to periodically test your app in the browser and automatically fix issues using our proprietary testing system, which is 3x faster and 10x more cost-effective than Computer Use models. Even better, Agent 3 can now generate other agents and automations to streamline your workflows.

What’s New

1. App Testing: Agent tests the apps it builds (using an actual browser)

Agent 3 now tests and fixes the app it is building, constantly improving your app behind the scenes. We are launching two different options here, depending on your needs:

  • Fri, Sep 29, 2023

    Showcasing Startups on Replit

    The fastest way to start, ship, and share

    The Replit platform isn't just a sandbox; it's a launchpad. There’s a lot to learn from startups building on Replit and how they leverage the platform to monetize and grow. Whether it’s a solo developer bootstrapping their startup on Bounties, or a startup launching their production application on Replit Deployments, the businesses on Replit demonstrate that Replit is the best place for iteration and feedback cycles to carry a project from idea to software, fast. Here are a few companies building and shipping on Replit today.

    Superagent

    Superagent is an open-source framework that enables developers to integrate AI Assistants into any application in a matter of minutes. With Superagent’s library, you can build an AI assistant that browses the web, reviews pull requests, or executes code on your behalf.

  • Thu, Sep 28, 2023

    Changes to Hosting on Replit

    We remain committed to providing a powerful free development experience to anyone who wants to code. This post is only about the hosting experience, which we are migrating to our new Deployments product. In April of this year, we released Reserved VM Deployments. Then, we shipped Static and Autoscale Deployments. Since then, we’ve noticed even more companies hosting anything from microservices to their entire applications on Replit. Some of our favorite startups to watch include:

    • HeyDATA
    • LeapAI
    • LlamaIndex

  • Wed, Sep 27, 2023

    Replit Core: Go from idea to software, fast

    The Replit Core subscription is the best developer tool subscription on the market to go from idea to software, fast. And it is the most valuable. A similar dev environment can cost up to 10x more on GitHub Codespaces, and a high-usage app will get crushed with overages 3-4x more expensive on Vercel (source). This end-to-end offering makes Replit Core the best subscription to build and launch your business. HeyDATA.org has gone from first line of code to +$200k ARR. As founder Steve Moraco says: "I don't think I would ever be able to complete a project like this without Replit. I started not knowing anything about web development, or even GitHub for that matter, and I've sort of just learned one skill at a time. I’ve gone from knowing almost nothing about technical development, to building a business and earning money."

    The Core bundle includes:

    • An 8 GiB RAM & 4 vCPU cloud-based development environment with no limits on usage, giving a powerful building experience
    • Replit AI, powered by market-leading models (currently GPT-4). Debug, autocomplete, and turn natural language into code with one click.

  • Tue, Sep 26, 2023

    Superagent.sh on Replit: An open-source framework for creating AI assistants

    Demand for AI-driven solutions is surging, and using an AI assistant is the fastest way to integrate AI into any product. Superagent’s assistants leverage large language models to understand human language, reason, and perform various tasks. In the spirit of “idea to software, fast,” superagent.sh used Replit to create an open-source, agentic AI framework that enables any developer to integrate production-ready AI Assistants into any application in a matter of minutes. Within 48 hours of Replit’s code-exec library release, superagent.sh added it to their core framework, deployed it (using Replit Autoscale), and created a Repl that enables anyone in the open-source community to fork it, customize it, and deploy it themselves.

    What can Superagent.sh do?

    Superagent enables developers to create AI assistants for a wide range of tasks, including customer support, legal work, code reviews, content generation, and more.

  • Mon, Sep 18, 2023

    AI Agent Code Execution API

    Lately, there has been a proliferation of new ways to leverage Large Language Models (LLMs) to do all sorts of things that were previously thought infeasible. But the current generation of LLMs still has limitations: they are not able to get exact answers to questions that require specific kinds of reasoning (solving some math questions, for example); similarly, they cannot dynamically react to recent knowledge beyond a particular context window (anything that happened after their training cutoff comes to mind). Despite these shortcomings, progress has not stopped: there have been advances in building systems around LLMs to augment their capabilities so that their weaknesses are no longer limitations. We are now in an age where AI agents can interact with multiple underlying LLMs optimized for different aspects of a complex workflow. We are truly living in exciting times!

    Code execution applications

    LLMs are pretty good at generating algorithms in the form of code, and the most prominent application of that particular task has been coding assistants. But a more significant use case that applies to everyone (not just software engineers) is the ability to outsource other kinds of reasoning. One way to do that is in terms of sequences of instructions to solve a problem, and that sounds pretty much like the textbook definition of an algorithm. Currently, doing that at production scale is challenging because leveraging LLMs' code generation capabilities for reasoning involves running untrusted code, which is difficult for most users. Providing an easy path for AI agents to evaluate code in a sandboxed environment, so that any accidents or mistakes would not be catastrophic, will unlock all sorts of new use cases. And we already see the community building upon this idea in projects like open-interpreter.

    Two options

    But how should this sandbox behave? We have seen examples of multiple use cases. Google's Bard recently released "implicit code execution," which seems to be used primarily for math problems. The problem is boiled down to computing the evaluation of a function over a single input and then returning the result. As such, it is inherently stateless and should be able to handle a high volume of requests at low latency. On the other hand, ChatGPT sessions could benefit from a more stateful execution, where there is a complete project with added files and dependencies, and outputs that can be fetched later. The project can then evolve throughout the session to minimize the amount of context needed to keep track of the state. With this use case, it's fine for the server to take a bit longer to initialize since the project will be maintained for the duration of the chat session.
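The stateless model can be sketched in a few lines. This is an illustrative simplification, not Replit's actual API: the function name is made up, and process-level isolation with a timeout stands in for a real sandbox, which would also restrict the filesystem and network.

```python
import subprocess
import sys

def run_stateless(code: str, timeout: float = 5.0) -> str:
    """Evaluate untrusted code in a fresh interpreter process and return
    its stdout. Every call starts from a clean slate, mirroring the
    stateless, low-latency model: no files or variables survive between
    requests."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,  # a runaway computation cannot hang the caller
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

# Each invocation evaluates a single program and discards all state.
print(run_stateless("print(2 ** 10)"))
```

The stateful variant would instead keep a long-lived project directory per session and reuse it across calls, trading slower initialization for much less context to re-send.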

  • Thu, Sep 14, 2023

    Showcasing Startups on Replit

    The fastest way to start, ship, and share

    The Replit platform isn't just a sandbox; it's a launchpad. There’s a lot to learn from startups building on Replit and how they leverage the platform to monetize and grow. Whether it’s a solo developer bootstrapping their time on Bounties, or a startup launching their production application on Replit Deployments, the businesses on Replit demonstrate that Replit is the best place for iteration and feedback cycles to carry a project from idea to product-market fit. To prove this, we're sharing a few companies building and shipping on Replit today.

    Leap API - Headshot AI

    Add image, music, and other AI generations to your app in minutes with Leap's API and SDK. Leap launched “Headshot AI”, an app that allows you to generate a professional headshot in minutes. No need to schedule a professional headshot session for that perfect LinkedIn photo. Just upload some of your existing photos to fine-tune the model to your face and get your professional results in minutes.

  • Wed, Sep 13, 2023

    Introducing the Replit Desktop App

    Replit is first and foremost a cloud-based company. You can code, run, and deploy your favorite apps using virtually any technology or framework, all on one platform. But just because we’ve created a cloud-based product doesn’t mean that you have to be limited to using Replit in the browser. That’s why today we’re super excited to announce the official Replit Desktop App.

    About the app

    With the Replit Desktop App, you can finally enjoy a native Replit experience free of browser distractions, on macOS, Windows, and Linux. This new form factor allows you to stay focused on coding with a “zen-mode”-like experience, quickly create multiple windows for different Repls, and easily access Replit from your dock or home screen.

  • Mon, Sep 11, 2023

    Deploy Bun Apps that Autoscale on Replit

    Using Replit Autoscale Deployments, developers can launch Bun apps that automatically scale from zero to meet customer demand, combining the speed of Bun and the power of Replit infrastructure. Build, iterate, deploy, and scale directly within Replit in seconds. Here’s how you can create and deploy a Bun App to Replit Autoscale Deployments in less than a minute:

  • Thu, Sep 7, 2023

    HeyDATA.org Profile: Personalized AI Built with Replit Deployments

    "I don't think I would ever be able to complete a project like this without Replit. I started not knowing anything about web development (or even GitHub, for that matter), and I've sort of just learned one skill at a time. I’ve gone from knowing almost nothing about technical development, to building a business and earning money." - Steve Moraco

    Steve Moraco leveraged Replit to build DATA, an AI service that replaces Siri with ChatGPT. In just a few months, Steve has scaled his AI business on Replit, growing rapidly to $18k MRR, 800 paid subscribers, and 100k+ impressions, with his site and project running on Replit Deployments.

    What is DATA?

  • Tue, Sep 5, 2023

    Announcing Autoscale and Static Deployments

    Today, we are announcing our biggest release yet: scalable hosting infrastructure directly from the editor. This release transforms Replit into an end-to-end platform to go from idea to production for your next project or startup. Two new products are immediately available to all Hacker and Pro subscribers:

    • Autoscale Deployments - Infrastructure that scales up when your app goes viral and scales down when your app goes unused. Only pay for the resources you use.
    • Static Deployments - A free option to host client-side sites like blogs and websites.

    Best of all, it's directly from the editor. Deploy to scalable infrastructure directly from the place you build. No additional vendors. Just Replit.

  • Tue, Sep 5, 2023

    Speeding up Deployments with Lazy Image Streaming

    Replit Deployments is our new offering that allows you to quickly go from idea, to code, to production. To make the experience as seamless as possible, we built tooling to convert a Repl into a container image which can be deployed to either a Google Cloud Virtual Machine or to Cloud Run. Early on, we started to hit some issues with large images taking too long to deploy to a virtual machine. It could take minutes to pull and unpack the container image before it could be started. There are two angles of attack: reduce the image size or speed up the pulling of images. It's preferable to shrink the container image size; however, that is not always possible. In this post we’ll go into some of the technologies and approaches used to speed up image pulling and booting.

    What is a container image?

    First, we’ll need to establish some baseline knowledge around container images. If you already know the details, you can skip this section. At a high level, a container image provides both a root filesystem and configuration for running a containerized workload. Inside the container, the filesystem is mounted to the root directory /. The root filesystem is stored as a list of multiple compressed tarballs, called layers, which are overlaid on top of each other. That is, if two layers have the same file, layers later in the list have higher precedence and their files replace the files from lower layers.
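The later-layers-win rule can be illustrated with a small sketch. This is a deliberate simplification: real layers are compressed tarballs with whiteout markers for deleted files, not in-memory maps.

```python
def merge_layers(layers):
    """Flatten an ordered list of image layers into one root-filesystem
    view. Each layer maps a path to file contents; layers later in the
    list take precedence, so their files replace same-named files from
    lower layers."""
    rootfs = {}
    for layer in layers:      # lowest layer first
        rootfs.update(layer)  # later layers overwrite earlier ones
    return rootfs

base = {"/etc/os-release": "debian", "/usr/bin/app": "v1"}
update = {"/usr/bin/app": "v2"}  # an upper layer shipping only the changed file
print(merge_layers([base, update])["/usr/bin/app"])  # the upper layer's copy wins
```

This layering is also why pulling can be slow: every layer must be fetched and unpacked before the merged filesystem can be mounted, which is the cost lazy streaming attacks.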

  • Tue, Aug 29, 2023

    Showcasing Startups on Replit

    The fastest way to start, ship, and share

    This past year, our team has been hard at work releasing some of our most advanced infrastructure improvements, including dedicated hosting and increased storage capacity. A second-order effect of a more powerful Replit is the rapid growth of businesses being built on the platform. The Replit platform isn't just a sandbox; it's a launchpad. There’s a lot to learn from startups building on Replit. The success stories emerging from our ecosystem serve as case studies for developers. Their architectural decisions, scaling strategies, and innovative solutions can provide insights for others looking to tread a similar path. Here are some of the newest startups deployed on Replit:

  • Sun, Aug 27, 2023

    Upgrading Analytics for Deployments

    Nine months ago, we launched analytics for every Repl. This feature allowed Explorers to view statistics about their Repl's visitors by appending /analytics to the end of Repl URLs. In the meantime, a lot has changed. Recently, we launched Reserved VM Deployments on Replit: an improved hosting service to quickly get you from idea to production. With Deployments, you can rest assured that your app will always be accessible even as you prototype changes, without needing to periodically ping the Repl to keep it running. Today, we’re excited to announce new and improved analytics for each of your Deployments! You can now find analytics in a tab under the Deployments pane in your Repl. As part of this release, the beta .repl.co analytics page will be deactivated.

  • Thu, Aug 24, 2023

    Packages: Powered Up

    Package management on Replit just got an upgrade. We’re releasing new features that make it faster to load, simpler to manage, and easier to troubleshoot packages for your projects. Read on to learn about the new additions or try it out now on Replit!

    Why we built the Packages tool

    Software projects already demand enough from developers implementing features, leaving those same developers little time to build everything from scratch. Pulling in code from other organizations or individuals can jump-start projects and help ensure their security, functionality, and integrity. However, existing package management tools are disconnected from each other and can be clunky to work with on the command line.

  • Thu, Aug 17, 2023

    Performance Mystery: Is Golang's Startup Time Slow?

    We at Replit pride ourselves on a snappy user experience. When I noticed our Universal Package Manager taking a slow ~200 ms to do even the most trivial operations, I took a look.

    Some context: Universal Package Manager, or UPM, is a package manager that works across a number of Replit-supported programming languages. It allows the Replit infrastructure to work with the same API for the packaging aspect of the system, regardless of the language. One important feature it offers is package guessing: the ability to look at your source files and figure out which packages you need automatically. However, since the package guessing operation has to happen when you click the Run button, it ever so slightly slows down the running of your code. I discovered that, regardless of which UPM operation was executed, it took at least ~200 ms to do the work. Given that UPM is written in Go, a language with a reputation for being fast, this was surprising.
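Startup overhead like this is easy to measure from outside the process. A rough sketch of that kind of measurement (this is an illustrative harness, not Replit's actual benchmark; it times a no-op interpreter run as a stand-in for a trivial CLI invocation):

```python
import subprocess
import sys
import time

def time_command(argv, runs=5):
    """Measure the wall-clock time to spawn a process and wait for it to
    exit, returning the fastest of several runs in milliseconds.
    Best-of-N reduces noise from cold caches and scheduling jitter."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(argv, capture_output=True)
        best = min(best, (time.perf_counter() - start) * 1000)
    return best

# A no-op program approximates pure startup cost: everything measured
# here is process creation, binary loading, and runtime initialization.
print(f"{time_command([sys.executable, '-c', 'pass']):.1f} ms")
```

If even the no-op case stays near a fixed floor regardless of the work requested, the bottleneck is in startup rather than in the operation itself, which is the pattern the post investigates.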