There is a default assumption in 2026 that goes roughly like this: AI work happens in Python. Models are trained in Python. Notebooks are Python. The big inference libraries are Python. The agent frameworks getting GitHub stars are Python. So if you are building anything that touches a model, the language is settled.
I have spent the last year building agentic products against Claude — real ones, with users, billing, real-time UI, the whole stack — and I have done it almost entirely in Ruby. Not because I’m a Ruby zealot. (Okay, slightly because of that.) But mostly because, once you actually try it, you discover something the discourse has missed: for the kind of AI work most engineers are actually doing in 2026, Ruby is a weirdly excellent fit.
This is the case for it.
The shape of the work has changed
Five years ago “AI engineering” mostly meant: train a model, evaluate a model, deploy a model. That work needs Python because the math libraries are Python. Nobody is going to rewrite PyTorch in another language and nobody should.
Today, for most engineers, “AI engineering” means something completely different. It means: take a hosted model someone else trained, wrap it in a product, give it tools, give it state, give it a user-facing surface, and ship.
The actual work is:
- HTTP calls to a model API
- Streaming responses back to a browser
- Persisting conversations and tool calls
- Background jobs that run agent loops
- A user interface that feels good
- Auth, billing, observability, the rest of the boring stuff
Look at that list. It’s a web app. It is the most web-app web app you’ve ever seen. And the language that has spent 20 years being the best in the world at building web apps is Ruby, with Rails on top of it.
Rails is an unfair advantage
The single biggest thing Ruby gives you is Rails. Not “Ruby and also Rails,” as if they’re separate gifts. Rails is the reason Ruby is interesting for AI work in 2026, full stop.
A Rails app comes with: a router, an ORM, migrations, jobs, mailers, caching, a real templating layer, asset pipelines, generators, conventions for testing, conventions for deploys, conventions for everything. You get all of that on day one. You don’t pick a stack. You don’t argue about FastAPI vs. Flask vs. Django vs. some new thing on Hacker News. You type rails new, and three minutes later you have a serious application.
For AI products specifically, this matters more than it sounds. The interesting work in an AI product is not the prompt. It is the plumbing around the prompt. It’s the conversation persistence. It’s the streaming UI. It’s the tool-call execution layer. It’s the background worker that runs an agent for ten minutes without blocking the request. It’s the rate limiting. It’s the audit log. It’s the billing.
Rails has all of that, ready to go, written by people who have been doing this since before “AI engineering” was a phrase.
Hotwire is the secret weapon
Here is the part nobody outside the Ruby community has noticed yet.
AI products live or die on their UI. A streaming token feed. A “the agent is thinking” indicator. A tool call appearing as a card. A progress bar for a long-running task. A diff view of a generated document. None of this is hard, but all of it is fiddly, and the React-ecosystem answer to it is “set up Next.js, add a streaming endpoint, write a custom hook, manage state, hope nothing rerenders too many times.”
The Hotwire answer — Turbo Streams plus Stimulus plus a server-rendered template — is roughly two files. You broadcast HTML over a WebSocket and it appears in the page. There is no client-side state. There is no hydration mismatch. There is no useEffect dependency-array bug at 11pm. The agent runs on the server, the server pushes HTML, the page updates. Done.
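To make the mechanism concrete, here is a framework-free sketch of what a Turbo Streams broadcast amounts to on the wire. In a real Rails app you would call a helper like Turbo::StreamsChannel.broadcast_append_to and never build this string yourself; the method below just shows the `<turbo-stream>` element that gets pushed over Action Cable and applied to the DOM by the browser-side library. Names and markup here are illustrative, not the turbo-rails API.

```ruby
# Sketch: the wire format behind a Turbo Streams broadcast. A real Rails app
# would use Turbo::StreamsChannel.broadcast_append_to instead of hand-rolling
# this; the point is that "streaming UI" is just server-rendered HTML fragments.
def turbo_stream_append(target, html)
  <<~HTML
    <turbo-stream action="append" target="#{target}">
      <template>#{html}</template>
    </turbo-stream>
  HTML
end

# Each token the agent produces becomes a tiny fragment appended to the page:
frame = turbo_stream_append("messages", "<span class=\"token\">Hello</span>")
puts frame
```

The browser-side Turbo library sees this element arrive over the WebSocket and appends the template's contents to the node with id "messages". No client state, no hydration, no hooks.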
I built a real-time agent dashboard in an afternoon that would have taken me a week in React. I am not exaggerating. The architectural fit between server-side agent loops and server-rendered streaming UI is so good that the only reason more people aren’t doing this is that they don’t know it’s an option.
Ruby is a language that was designed to feel good
People treat this like a soft, aesthetic argument and dismiss it. They shouldn’t.
Here is a hard, economic version: the language you use shapes the prompts you write. When you ask an AI to generate code, you are implicitly asking it to generate code in some language. Languages with more ceremony — more type annotations, more imports, more boilerplate — produce longer prompts, longer responses, more tokens to read, more places for the model to go wrong.
Ruby is the lowest-ceremony language in widespread production use. A Ruby method does what it says on the tin. A Rails controller is twelve lines that read like English. The signal-to-noise ratio of a Ruby file is just higher than the equivalent in TypeScript or Python or, god help you, Java. When the model writes Ruby for you, it writes less of it, and what it writes is closer to the platonic shape of the thing you wanted.
This compounds. Over a thousand agent runs, “less code, clearer code” turns into “fewer mistakes, faster reviews, cheaper tokens.” It is not a vibe. It is a real, measurable advantage.
The libraries are actually fine
The standard objection is: “Sure, but the AI libraries are all in Python.”
Two answers.
First: which libraries do you actually need? For 95% of agentic products, the libraries you need are an HTTP client and a JSON parser, and Ruby has shipped both in its standard library for decades. The major model providers all expose clean REST APIs. Calling Claude from Ruby is one method call. Streaming Claude from Ruby is one method call with a block. Tool use is a case statement. There is no mystery here.
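To back that claim up, here is a sketch using nothing but the standard library. The endpoint and headers follow Anthropic's documented Messages API; the model name is illustrative, and the tool names in the dispatcher are made up for the example.

```ruby
require "net/http"
require "json"
require "time"

# Calling a hosted model is an HTTP POST with a JSON body. Endpoint and headers
# per Anthropic's Messages API docs; the model name is an illustrative example.
def call_claude(messages, api_key: ENV["ANTHROPIC_API_KEY"])
  uri = URI("https://api.anthropic.com/v1/messages")
  req = Net::HTTP::Post.new(uri, {
    "x-api-key"         => api_key,
    "anthropic-version" => "2023-06-01",
    "content-type"      => "application/json"
  })
  req.body = { model: "claude-sonnet-4-5", max_tokens: 1024, messages: messages }.to_json
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
  JSON.parse(res.body)
end

# And tool use really is a case expression: match the tool name the model asked
# for, run it, hand the result back. These tool names are invented for the sketch.
def dispatch_tool(name, input)
  case name
  when "get_time" then Time.now.utc.iso8601
  when "add"      then input.fetch("a") + input.fetch("b")
  else raise ArgumentError, "unknown tool: #{name}"
  end
end
```

That is the whole integration surface for a basic product: one method to call the model, one expression to execute its tool requests.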
Second: the gems that do exist are quietly excellent. ruby-openai, anthropic, ruby-llm, instructor-rb, langchainrb — they cover the actual needs. The MCP ecosystem has Ruby clients. The vector store world has Ruby bindings for the obvious choices. You are not pioneering. You are just walking a slightly less crowded path.
The gap between “Ruby AI ecosystem” and “Python AI ecosystem” is real but it is concentrated entirely in the parts of the stack you don’t touch when you are building a product. Training? Python. Research? Python. Fine-tuning? Python. Calling a model from a real application that has users? Either works, and Ruby is more pleasant.
The agentic loop wants a long-running, stateful runtime
Here’s the thing nobody wants to admit about Python web frameworks: they were designed for stateless request/response, and agentic workloads are not that.
An agent loop is long-running. It holds a conversation in memory. It calls tools, waits for results, calls more tools, eventually responds. It is closer to a chat session than to a CRUD endpoint. The natural unit of work is “a process that runs for the lifetime of a conversation,” not “a function that runs for the lifetime of a request.”
Rails plus Solid Queue plus Action Cable handles this beautifully. You enqueue an agent job. The job runs in a worker process. As the agent thinks, it broadcasts updates over a channel. The browser sees them in real time. When the agent is done, the job ends. The state is in the database. The UI was always in sync.
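The shape of that loop can stand alone. Below is a framework-free sketch: in a real app this would be an ActiveJob run by Solid Queue, and `broadcast` would push a Turbo Stream over Action Cable, but here both collaborators are injected lambdas so the structure is visible and testable. The class and key names are illustrative, not a real API.

```ruby
# A framework-free sketch of the agent loop described above. In Rails this is
# an ActiveJob on Solid Queue; `broadcast` would push Turbo Streams over
# Action Cable. Names and message shapes here are invented for the sketch.
class AgentRun
  def initialize(model:, tools:, broadcast:)
    @model, @tools, @broadcast = model, tools, broadcast
  end

  # Keep asking the model until it stops requesting tools, broadcasting each
  # step as it happens. State lives in `messages`, which a real app would
  # persist to the database between turns.
  def call(messages)
    loop do
      reply = @model.call(messages)
      @broadcast.call(reply)
      return reply unless reply[:tool]   # no tool requested: final answer
      result = @tools.fetch(reply[:tool]).call(reply[:input])
      messages += [reply, { role: "tool", content: result }]
    end
  end
end
```

Swap the lambdas for a real model client and real tool objects and this is the worker-side core of the product; everything else is Rails.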
You can absolutely do this in Python. People do. It just isn’t where the framework momentum is, and you end up assembling the pieces yourself — Celery for jobs, FastAPI for HTTP, a WebSocket layer on the side, Redis for state, a separate React app for the UI. By the end you have built a Rails app, badly, out of five different libraries.
SQLite, of all things
A surprise lesson from the last year: SQLite is the best database for AI products and Rails 8 made it production-grade.
AI workloads are read-heavy, write-occasional, and latency-sensitive. They want to be next to the application process. They do not want a network hop to a database server for every tool call. SQLite, in WAL mode, on a fast SSD, on the same machine as your Rails app, is faster than any networked database for the kinds of queries an agent actually issues.
Rails 8’s Solid trio (Solid Queue, Solid Cache, Solid Cable) all run on SQLite. You can build an entire serious AI product on a single VM with no Postgres, no Redis, no separate worker tier, no RabbitMQ. The operational surface is roughly nothing. You deploy one app process and a SQLite file.
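A stock Rails 8 app is already shaped for this: the generated production config points the app, cache, queue, and cable at separate SQLite files sitting next to the process. A trimmed sketch follows — verify the exact keys against the database.yml your own `rails new` generates, since defaults shift between versions.

```yaml
# config/database.yml -- trimmed sketch of the Rails 8 production defaults
production:
  primary:
    adapter: sqlite3
    database: storage/production.sqlite3
  cache:
    adapter: sqlite3
    database: storage/production_cache.sqlite3
    migrations_paths: db/cache_migrate
  queue:
    adapter: sqlite3
    database: storage/production_queue.sqlite3
    migrations_paths: db/queue_migrate
  cable:
    adapter: sqlite3
    database: storage/production_cable.sqlite3
    migrations_paths: db/cable_migrate
```

Four database files, one machine, zero network hops between the agent and its state.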
Try setting that up in the Python world. You will end up with six containers and a YAML file the length of a novel.
What you give up
I am not going to pretend this is free. The Ruby-for-AI bet has real costs:
- You will be a bit early. The bleeding-edge integrations land in Python first. If you need a brand-new model API on day one, you may be writing the gem yourself.
- The hiring pool is different. There are fewer Ruby engineers than Python engineers and almost none of them have “AI” on their LinkedIn. You will hire on aptitude, not keywords.
- The discourse is against you. Every blog post, every YouTube tutorial, every conference talk assumes Python. You will be translating in your head constantly.
- Some libraries genuinely don’t exist. If you need to do real ML work — embeddings on novel modalities, custom retrieval pipelines, anything that touches numpy — you will be making peace with calling Python from Ruby, or rewriting things.
These are real. I just don’t think they outweigh what you gain: a runtime that wants to host long-lived processes, a framework that already solved the boring problems, a UI layer that streams server-rendered HTML over WebSockets, and a language that produces less code per unit of intent.
A practical recommendation
If you are starting a new AI product in 2026 and your team’s strengths are in web engineering — not in research, not in training, not in custom inference — try Ruby. Specifically, try Rails 8 with Solid Queue and Hotwire. Wire up Claude or GPT or whatever you like through a thin gem. Stream responses with Turbo. Persist conversations in SQLite. Run jobs in the same process if you want, in workers if you don’t. Deploy to a single machine and feel slightly absurd about how simple it is.
Ship something in two weeks instead of two months. Then come back and tell me Ruby is dead.
The default assumption is wrong. The work has shifted. The tools that win in this new shape are the ones that already knew how to build web apps for users — and Ruby has been the best in the world at that for a long, long time.
It is just that nobody in the AI conversation has been paying attention.