How AI will change programming with Amjad Masad, CEO of Replit

In this episode of the Founders in Arms podcast, we sit down with Amjad Masad—co-founder and CEO of Replit—to discuss AI coding assistants, the future of software creation, developer productivity, large language models, AGI, and why the barrier between an idea and a working product is collapsing faster than most people realize.

This conversation dives deep into:

  • AI coding assistants and software creation

  • Replit’s long-term vision

  • How LLMs and transformers work

  • Developer productivity in an AI-first world

  • AGI, alignment, and reliability

  • The future of programming careers

If you're building in AI, SaaS, developer tools, or enterprise software, this episode is packed with technical insight and long-term thinking.

In this episode, we cover:

(00:00) Introduction and why this moment in AI matters

Amjad Masad joins the podcast to talk about one of the biggest shifts in computing in years: the rise of AI-assisted software creation.

The conversation opens with Replit’s mission:

  • Make software the fastest and most accessible it has ever been

  • Reduce the distance between an idea and a working product

  • Turn programming into a more collaborative, browser-based, AI-assisted experience

Amjad frames the current AI wave as a historic moment for builders—one that feels comparable to the rise of the modern web, but even faster.

(01:24) What Replit is building and the original product vision

Amjad explains that Replit started with a simple but ambitious idea: a collaborative online programming environment that makes software creation dramatically easier.

Over time, that vision expanded into:

  • Collaborative development in the browser

  • Community-driven software creation

  • Team workflows for developers

  • AI-powered coding assistance

  • A faster path from idea to product

His view is that much of Replit’s roadmap was visible early on. The hard part was not imagining it—it was actually building it.

(05:21) Why Replit serves such a broad range of users

Rather than optimizing for a narrow user persona, Amjad thinks about Replit through the lens of jobs to be done.

His core belief:

People do not come to Replit because they fit one demographic profile. They come because they want to make something.

That includes:

  • Students learning to code

  • Hobbyists building side projects

  • Professional developers shipping products

  • Teams collaborating on software

  • Founders prototyping startup ideas

The common thread is not background—it is intent.

(09:14) Why developers resist change more than people think

One of the most interesting parts of the conversation is Amjad’s argument that developers—despite being agents of technological change—are often extremely conservative about their own tools.

He points out:

  • Developers tend to stick with familiar workflows

  • Tooling habits change slowly

  • Programming often evolves “one generation at a time”

  • Innovation in developer tools is harder than outsiders assume

That helps explain why browser-based programming and collaborative coding were slow to emerge, even if the need seemed obvious.

(12:00) The core mission: reducing the distance between an idea and a product

Amjad describes Replit’s deeper mission as shrinking the time between having an idea and getting a real product into people’s hands.

His vision is that software creation keeps compressing:

  • From complex setup and manual coding

  • To collaborative development environments

  • To AI-assisted building

  • To natural-language prototyping

  • Eventually to near-instant MVP generation

He shares an example of a user posting an idea and getting a prototype in about 30 minutes—with a human builder accelerated by AI.

The long-term direction is clear: software gets faster to create, easier to test, and more accessible to more people.

(15:15) Why AI will generate prototypes before it replaces full software teams

Amjad’s take is nuanced.

He believes AI will soon be able to generate:

  • Initial apps

  • Rough MVPs

  • Basic software prototypes

  • Starting points for real products

But he does not believe AI will immediately replace the human work required to:

  • Iterate on edge cases

  • Make systems reliable

  • Maintain and scale software

  • Understand customers deeply

  • Turn prototypes into durable businesses

In other words, AI can get you started quickly—but human judgment still matters once software meets reality.

(16:49) How LLMs and transformers actually work

The episode goes deep into the technical side of large language models.

Amjad explains the transformer model in practical terms:

  • Transformers introduced attention mechanisms

  • Attention helps models focus on relevant parts of the input

  • Instead of hand-coding language rules, models learn patterns from data

  • With enough scale, these systems begin to show emergent reasoning abilities

He describes this shift as part of software 2.0:

Instead of programmers explicitly writing every algorithm, machine learning systems discover algorithms by optimizing over large datasets.
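Amjad describes attention at a conceptual level; for readers who want to see the mechanism itself, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside transformers. This is an illustrative toy, not Replit's or any production implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key, and the scores (after softmax)
    weight a mix of the value vectors: the model 'focuses' on the
    most relevant parts of the input."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # weighted sum of values

# Three 4-dimensional token vectors attending to each other (self-attention)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one context-mixed vector per input token
```

Crucially, the weights are learned from data rather than hand-coded, which is exactly the software 2.0 shift Amjad describes.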

(23:27) Why scale alone was not enough for ChatGPT

Amjad argues that ChatGPT’s leap was not only about more parameters or more data.

He points to two major breakthroughs:

  • Supervised fine-tuning

  • Reinforcement learning from human feedback (RLHF)

These helped models become more useful, more conversational, and more aligned with what humans actually want.

His implication is important for founders:

The biggest gains in AI may not come only from scale. They may come from better training methods, better interfaces, and better system design.

(29:01) Why AI is still unreliable for full production software

One of Amjad’s core concerns is reliability.

Traditional software can be tested with deterministic engineering methods:

  • Unit tests

  • Program verification

  • Repeatable behavior

LLMs are different because they are stochastic: the same prompt can produce different outputs. That makes them powerful, but also harder to trust in high-stakes environments.

This is one reason Amjad believes AI-generated full-stack products still need human oversight:

  • Models can hallucinate

  • Outputs are not always reproducible

  • Reliability remains hard to guarantee

  • Traditional engineering workflows do not map cleanly onto LLM behavior
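This tension can be made concrete. The sketch below uses a hypothetical `flaky_model` stand-in for a real LLM call and shows one common defensive pattern: validate every output against a schema and retry on failure, wrapping a stochastic generator in a deterministic contract that the rest of the system can unit-test:

```python
import json
import random

def flaky_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call: usually returns valid
    JSON, but occasionally produces something unparseable."""
    if random.random() < 0.3:
        return "Sure! Here is your data: {oops"
    return json.dumps({"title": "Todo app", "stack": ["python", "sqlite"]})

def generate_with_validation(prompt: str, retries: int = 5) -> dict:
    """Treat the model as untrusted: check every output, retry on failure."""
    for _ in range(retries):
        raw = flaky_model(prompt)
        try:
            data = json.loads(raw)
            if isinstance(data, dict) and "title" in data:
                return data  # passed the schema check
        except json.JSONDecodeError:
            continue  # garbled output: try again
    raise RuntimeError("model failed validation after retries")

random.seed(42)
result = generate_with_validation("Generate an app spec as JSON")
print(result["title"])
```

Patterns like this restore some testability, but they mitigate rather than eliminate the reliability gap Amjad describes.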

(31:55) Constitutional AI, online learning, and what today’s models still lack

The discussion expands into newer ideas like:

  • Constitutional AI

  • Model interrogation by other models

  • Reinforcement from human feedback

  • Online learning during deployment

Amjad sees online learning as especially important for the future.

Today’s models usually need retraining after deployment. Truly general intelligence, in his view, would require systems that can:

  • Learn continuously

  • Adapt in production

  • Improve across domains

  • Update behavior without full retraining cycles

That is one of the key gaps between current LLMs and anything resembling AGI.
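To make "online learning" concrete, here is a deliberately tiny sketch: a one-parameter model updated one example at a time as data streams in, with no batch retraining run. This is classic online SGD, offered only to illustrate the idea, not as a claim about how frontier labs would implement continuous learning:

```python
import random

def online_linear_regression(stream, lr=0.05):
    """A single weight updated example-by-example via SGD.
    There is no stored dataset and no separate 'retraining' phase:
    the model adapts as each new observation arrives."""
    w = 0.0
    for x, y in stream:
        pred = w * x
        w -= lr * (pred - y) * x  # gradient step on this one example
    return w

random.seed(0)
# Streaming observations from y = 2x with a little noise
xs = [random.uniform(0.5, 1.5) for _ in range(200)]
stream = [(x, 2 * x + random.gauss(0, 0.01)) for x in xs]
w = online_linear_regression(stream)
print(round(w, 2))  # converges close to 2.0
```

Today's LLMs, by contrast, are frozen after training; closing that gap is the research problem Amjad highlights.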

(37:54) The economics of AI and whether AI products are too expensive

The episode also covers the economics of inference.

Amjad notes that some AI products may look expensive today—but there is enormous room for optimization across:

  • Smaller domain-specific models

  • Better inference routing

  • Hardware improvements

  • More efficient chips

  • Software-level optimization

His view is that current AI economics are not fixed. They are early.

That means founders should be careful not to assume today’s cost structure is permanent.
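As a toy illustration of inference routing, the sketch below sends cheap, simple queries to a small model and reserves the expensive model for harder ones. The prices and the word-count heuristic are made up for illustration; real routers typically use learned classifiers:

```python
# Hypothetical per-1K-token prices; real pricing varies by provider.
MODELS = {
    "small": {"cost_per_1k": 0.0002},
    "large": {"cost_per_1k": 0.0100},
}

def route(prompt: str) -> str:
    """Naive heuristic: short prompts go to the cheap model."""
    return "small" if len(prompt.split()) < 30 else "large"

def estimated_cost(prompt: str, output_tokens: int = 500) -> float:
    model = route(prompt)
    tokens = len(prompt.split()) + output_tokens
    return tokens / 1000 * MODELS[model]["cost_per_1k"]

simple = "Summarize this sentence."
hard = " ".join(["word"] * 100) + " now write a detailed architecture review"
print(route(simple), route(hard))  # small large
```

If most traffic is simple, a router like this can cut average cost by an order of magnitude, which is the kind of headroom Amjad is pointing at.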

(42:32) Why software creation may get dramatically cheaper

One of the biggest long-term predictions in the episode is that the cost of creating software will keep falling.

Amjad suggests that:

  • Basic app creation trends toward zero cost

  • MVP generation becomes more automated

  • Cloning or recreating simple software interfaces becomes easier

  • The real value shifts away from basic implementation and toward judgment, systems, customer understanding, and distribution

This has major implications for startups: building may become cheaper, but winning will still require insight.

(44:21) What happens to software engineers in an AI-first world

Amjad predicts a bimodal future for software talent.

The biggest winners may be:

  • Platform engineers working on low-level systems, infrastructure, and core tooling

  • Product-oriented builders who understand customers, markets, and product judgment

The group most at risk:

  • General-purpose “middle layer” developers doing repetitive application glue work

His argument is that AI will be especially effective at standard implementation tasks, while deeper systems work and product reasoning remain more defensible.

(47:13) How much more productive AI makes developers today

Amjad shares that early estimates suggest meaningful productivity gains from AI coding tools already.

He references a range of outcomes:

  • Conservative estimates around 20% improvement

  • Anecdotal reports of much larger gains

  • Some workflows feeling 2x faster

  • A belief that 10x productivity improvements may arrive over the next few years

The broader point is that AI copilots are not theoretical anymore. They are already changing how engineers work.

(50:36) Will software be rebuilt for AI—or layered onto existing systems?

A fascinating section of the conversation asks whether AI-native development will require new programming languages and new software architectures.

Amjad’s answer is pragmatic:

Most technological change layers on top of old systems instead of replacing them cleanly.

Just as the internet still carries the baggage of older abstractions, AI will likely be added to existing workflows before fully replacing them.

That means founders should expect messy transitions rather than clean resets.

(53:37) What makes an LLM different from AGI

The conversation then shifts into AGI.

Amjad draws a clear distinction between today’s LLMs and true general intelligence.

His view:

Current models are impressive and generalizable in narrow ways, but they still do not autonomously learn across entirely new domains the way humans can.

For example, current systems still struggle to:

  • Learn new environments independently

  • Maintain persistent state in a robust way

  • Operate across domains without retraining

  • Form durable long-term goals without heavy scaffolding

That leaves a major gap between “useful AI” and true AGI.

(58:08) Consciousness, materialism, and whether intelligence is fully computable

One of the most philosophical sections of the episode centers on the limits of computational models of intelligence.

Amjad raises questions around:

  • Consciousness

  • Pain and pleasure as core features of experience

  • Materialist explanations of the mind

  • Whether intelligence is fully Turing-computable

  • The relevance of thinkers like Roger Penrose

  • The possibility that human reasoning is not fully captured by today’s computational models

He does not dismiss progress in AI—but he cautions against overconfidence in simplistic assumptions about consciousness and machine intelligence.

(1:03:21) Why AI alignment matters even without near-term AGI

Even though Amjad is skeptical of the most extreme AGI doom scenarios, he still takes alignment seriously.

His reasoning is practical:

Humans have historically struggled to align powerful systems with human well-being.

He compares AI alignment to capitalism:

  • Powerful optimization systems create huge value

  • But they also create side effects

  • Harmful outcomes often emerge unintentionally

  • Alignment is hard even when incentives are visible

So even if near-term AI remains narrow, it still matters how these systems are trained, deployed, and constrained.

(1:07:14) Weaponization, misuse, and the more realistic AI risks

Amjad suggests that more realistic near-term risks may include:

  • Weaponization

  • Automated trolling and manipulation

  • Harmful misuse by bad actors

  • AI combined with drones or autonomous systems

  • Dangerous amplification of political or social control

Rather than assuming instant sci-fi catastrophe, he points to a more grounded concern: powerful tools in the hands of humans with bad incentives.

(1:10:34) Why this feels like the biggest moment in tech in years

The episode closes with a strong reflection from Amjad on the pace of change.

He compares today’s AI moment to the early years of the web—but says this feels even bigger.

His message to founders and builders is clear:

  • This is a rare platform shift

  • The pace of progress is exhausting but real

  • There is huge opportunity for people willing to engage deeply

  • The builders who understand these tools early will have an edge

Key Takeaways for Founders

AI is collapsing the time from idea to prototype.
The biggest shift may be how quickly teams can go from concept to working software.

Developer productivity is already changing.
AI copilots are not just hype—they are creating real output gains.

The future of software talent will polarize.
Platform engineers and product-minded builders may benefit the most.

LLMs are powerful, but still unreliable.
There is a big difference between useful generation and production-grade reliability.

AGI is uncertain, but alignment is urgent anyway.
Even narrow AI systems can create serious misalignment and misuse problems.

This is a major platform shift.
Founders who treat AI as foundational rather than optional may be better positioned over the next decade.

Listen to Founders in Arms

Founders in Arms is a podcast for ambitious builders—covering startup strategy, AI, developer tools, fintech, and the realities of scaling companies.

If you’re building in AI, SaaS, or developer infrastructure, this is required listening.

🎙 Subscribe to Founders in Arms on your favorite platform.
💬 Join the conversation at TribeChat.com.
🚀 Discover more insights from top founders and operators shaping the future of tech.
