Building real-time features can make an application feel alive. When data updates instantly, users stay engaged and informed. I want to walk you through some clear, practical ways to add this capability to a Ruby on Rails application. Each method has its place, depending on what you need to build.
Let’s start with the tool that comes with Rails itself. ActionCable is built right into the framework. It uses WebSockets to create a persistent, two-way connection between the server and the client’s browser. Think of it like a telephone line that stays open, allowing either side to talk at any time.
The core concept in ActionCable is the channel. A channel is like a dedicated room for a specific type of conversation. You might have a ChatChannel for messaging and a NotificationsChannel for alerts. Clients subscribe to the channels they care about.
Here is a basic channel for a chat room. When a user subscribes, they start listening to a stream. A stream is a specific flow of data, like all messages for room number 5.
class ChatChannel < ApplicationCable::Channel
  def subscribed
    stream_from "chat_room:#{params[:room_id]}"
  end

  def receive(data)
    message = Message.create!(
      content: data['message'],
      user: current_user,
      room_id: params[:room_id]
    )
    # Broadcast an explicit hash rather than the whole model,
    # so only the intended attributes reach the client.
    ActionCable.server.broadcast(
      "chat_room:#{params[:room_id]}",
      { id: message.id, content: message.content, author: current_user.name }
    )
  end
end
On the client side, in your JavaScript, you would connect to this channel. When the server broadcasts a message, your JavaScript code receives it and can update the webpage immediately. This is perfect for features like live chat, activity feeds, or real-time notifications that appear without a page refresh.
ActionCable is great because it’s integrated. You can use your existing Rails models and authentication. But for very high numbers of connections, you need to think about scaling. The built-in server works for development, but in production, you often need to connect it to Redis so multiple application servers can talk to each other.
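In practice, that usually means switching the adapter in config/cable.yml. Here is a minimal sketch; the URL and channel_prefix values are placeholders you would adapt to your own environment.

```yaml
# config/cable.yml
production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL", "redis://localhost:6379/1") %>
  channel_prefix: myapp_production
```

With the redis adapter, every application server publishes and subscribes through the same Redis instance, so a broadcast from one server reaches clients connected to any of the others.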
That brings me to the next pattern. Using Redis and its publish-subscribe, or “pub/sub,” system is a common way to handle real-time events across multiple servers. Redis is a very fast in-memory data store. The pub/sub feature lets one part of your application shout about an event, and any other part that’s listening can hear it.
In this setup, your main Rails application might publish an event to Redis when something happens, like a new comment. A separate process, perhaps the ActionCable server, is subscribed to Redis and waits for these events. When it hears one, it broadcasts the data to the connected clients via WebSocket.
Here is a simple publisher class.
class RealTimePublisher
  def publish(event_type, payload)
    message = {
      id: SecureRandom.uuid,
      type: event_type,
      payload: payload,
      published_at: Time.current.iso8601
    }
    Redis.current.publish('global_events', message.to_json)
  end
end
# Somewhere in your controller after saving a comment
RealTimePublisher.new.publish('comment.created', { post_id: @post.id, author: current_user.name })
This separation is powerful. Your web application can focus on handling HTTP requests, and a dedicated service handles the WebSocket connections. They communicate through Redis. This makes your system easier to scale and manage.
Not every real-time feature needs a two-way WebSocket connection. Sometimes you only need the server to send updates to the client. For this, Server-Sent Events, or SSE, is a fantastic and often simpler choice. It works over a standard HTTP connection that stays open.
The client opens a connection to a special endpoint on your server. The server keeps that connection open and can send data down it whenever needed. The client uses the native EventSource API in JavaScript to listen for these messages.
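The wire format behind SSE is refreshingly simple: each message is a handful of "field: value" lines followed by a blank line, which EventSource parses for you. Here is a small formatter sketch (the method name is my own) that shows exactly what goes down the pipe:

```ruby
# Formats a payload into the text/event-stream wire format.
# Each message is "field: value" lines terminated by a blank line;
# the browser's EventSource parses this framing automatically.
def format_sse_message(data, event: nil, id: nil)
  lines = []
  lines << "event: #{event}" if event
  lines << "id: #{id}" if id
  lines << "data: #{data}"
  lines.join("\n") + "\n\n"
end

puts format_sse_message('{"unread":3}', event: "notification", id: "42")
```

The optional id field is what lets the browser resume after a dropped connection: it sends the last seen id back in the Last-Event-ID header when it reconnects.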
Here is how you might set up an SSE endpoint in a Rails controller to stream notifications to a user.
class NotificationsController < ApplicationController
  include ActionController::Live

  def stream
    response.headers['Content-Type'] = 'text/event-stream'
    response.headers['Cache-Control'] = 'no-cache'
    response.headers['Last-Modified'] = Time.now.httpdate

    # Send an initial event so the client knows it is connected
    sse = ActionController::Live::SSE.new(response.stream)
    sse.write({ status: 'connected' }, event: 'ping')

    # Subscribe to a Redis channel for this user's notifications
    redis = Redis.new
    redis.subscribe("user:#{current_user.id}:notifications") do |on|
      on.message do |_channel, msg|
        sse.write(JSON.parse(msg))
      end
    end
  rescue IOError
    # Client disconnected
  ensure
    redis&.close # Closing the connection also ends the subscription
    response.stream.close
  end
end
SSE is supported in all modern browsers. It handles reconnection automatically if the network drops. I find it perfect for dashboards that show live metrics, news tickers, or simple notification streams. You don’t have to manage the complex state of a WebSocket connection.
A common need in real-time apps is knowing who is online. This is called presence tracking. You want to show a green dot next to a user’s name if they are currently connected to your application.
Implementing this requires you to track when a user connects and disconnects. With ActionCable, you can hook into the subscribed and unsubscribed methods in your channel. You need to store this state somewhere accessible, like Redis.
Here is a basic tracker that records when a user comes online.
class PresenceTracker
  def user_connected(user_id, connection_id)
    key = "presence:user:#{user_id}"
    data = { connected_at: Time.current.iso8601, connection_id: connection_id }
    Redis.current.hset(key, connection_id, data.to_json)
    Redis.current.expire(key, 3600) # Expire after an hour of inactivity
    broadcast('user.online', user_id)
  end

  def user_disconnected(user_id, connection_id)
    key = "presence:user:#{user_id}"
    Redis.current.hdel(key, connection_id)
    # If no connections remain for this user, they are fully offline
    if Redis.current.hlen(key).zero?
      Redis.current.del(key)
      broadcast('user.offline', user_id)
    end
  end

  def online_users
    # Logic to find all users with an active presence key,
    # e.g. scanning for keys matching "presence:user:*"
  end

  private

  # Stand-in for however you notify clients; with ActionCable this
  # could be a broadcast to a shared presence stream.
  def broadcast(event, user_id)
    ActionCable.server.broadcast('presence', { event: event, user_id: user_id })
  end
end
You would call user_connected when a WebSocket connection is established. The connection_id is important because a single user might have multiple tabs or devices open. You only want to mark them as offline when the last connection closes. You can then broadcast this “presence” information to other relevant users so their UI updates.
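The "last connection wins" logic is easy to get wrong, so here is a pure-Ruby sketch of it in isolation. The class name is my own; a production version would keep this state in Redis, as above, so all servers share it.

```ruby
require "set"

# In-memory sketch of multi-connection presence tracking.
# A user is online while at least one connection remains open.
class InMemoryPresence
  def initialize
    @connections = Hash.new { |h, k| h[k] = Set.new }
  end

  def connect(user_id, connection_id)
    @connections[user_id] << connection_id
  end

  # Returns true only when the user's last connection closed,
  # i.e. the moment they actually went offline.
  def disconnect(user_id, connection_id)
    @connections[user_id].delete(connection_id)
    if @connections[user_id].empty?
      @connections.delete(user_id)
      true
    else
      false
    end
  end

  def online?(user_id)
    @connections.key?(user_id)
  end
end
```

A user with two browser tabs open stays online when one tab closes; only closing the second tab flips them to offline.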
One of the more complex real-time features is collaborative editing, like in Google Docs. This is often solved with a technique called Operational Transformation, or OT. The idea is that when two users type at the same time, you need to merge their changes so everyone sees the same document.
The core challenge is order. If User A types “Hello” at the start of a document and User B types “World” at the end, their operations don’t conflict. But if they both type in the same position, you need a set of rules to decide the final result.
Here is a very simplified look at the concept. You don’t just send the final text; you send the operation, like “insert ‘cat’ at position 12.” Each operation has a version number.
def apply_operation(document, incoming_op, client_version)
  # 1. Get all operations that have happened since this client last synced
  pending_ops = pending_operations_since(client_version)

  # 2. Transform the incoming operation against the pending ones
  transformed_op = transform(incoming_op, pending_ops)

  # 3. Apply the transformed operation to the document
  document.content = apply_op_to_text(document.content, transformed_op)

  # 4. Store and broadcast the new operation
  store_operation(transformed_op)
  broadcast_to_collaborators(transformed_op)
end
The transform function is the complex heart of OT. It adjusts an operation based on other operations that happened before it. For example, if someone else inserted text before your cursor position, your “insert at position 12” might need to become “insert at position 15.”
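For the insert-versus-insert case, the position-shifting rule can be shown in a few lines of plain Ruby. This is a deliberately minimal sketch with names of my own; real OT libraries also handle deletes, multi-character edits, and deterministic tie-breaking when two users insert at the exact same position.

```ruby
# A single insert operation: "insert `text` at `position`".
InsertOp = Struct.new(:position, :text)

# Adjusts `op` to account for a concurrent insert that was applied first.
# If the other insert landed at or before our position, our position
# shifts right by the length of the inserted text.
def transform_insert(op, concurrent_op)
  if concurrent_op.position <= op.position
    InsertOp.new(op.position + concurrent_op.text.length, op.text)
  else
    op
  end
end

op = InsertOp.new(12, "cat")
earlier = InsertOp.new(5, "dog") # someone else already inserted "dog" at 5
transform_insert(op, earlier)    # our insert shifts from position 12 to 15
```

Composing this pairwise transform across every pending operation is what the transform step in the earlier pseudocode does.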
Implementing OT from scratch is a significant undertaking. Many teams use existing libraries. However, understanding this pattern helps you appreciate what’s needed for true, conflict-free collaboration.
When you open up real-time connections, you also open up new ways for your system to be stressed or attacked. A user could write a script to send thousands of chat messages per second. Another might try to open ten thousand WebSocket connections. This is why rate limiting for real-time endpoints is critical.
You need to limit actions over time. A common method is the “token bucket.” Imagine a bucket that can hold 60 tokens. Every time a user sends a message, you take one token out. The bucket refills with one new token every second. If the bucket is empty, the user must wait.
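That description translates almost directly into plain Ruby. This is a sketch with names of my own; the clock is injected so the refill logic is easy to test, and in a multi-server setup the token state would live in a shared store like Redis rather than instance variables.

```ruby
# A token bucket: holds up to `capacity` tokens, refills continuously
# at `refill_per_second`, and each allowed action consumes one token.
class TokenBucket
  def initialize(capacity: 60, refill_per_second: 1, clock: -> { Time.now.to_f })
    @capacity = capacity
    @refill_per_second = refill_per_second
    @clock = clock
    @tokens = capacity.to_f
    @last_refill = clock.call
  end

  # Returns true and consumes a token if one is available.
  def allow?
    refill
    return false if @tokens < 1
    @tokens -= 1
    true
  end

  private

  def refill
    now = @clock.call
    elapsed = now - @last_refill
    @tokens = [@tokens + elapsed * @refill_per_second, @capacity].min
    @last_refill = now
  end
end
```

Unlike a per-minute counter, the bucket allows short bursts up to its capacity while still enforcing the average rate over time.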
A simpler approximation of this idea is a fixed-window counter, which you can implement in Redis with expiring keys: count actions per minute and reject requests once the count passes the limit.
class RealTimeRateLimiter
  def limit_per_minute(user_id, action, limit: 60)
    key = "ratelimit:#{user_id}:#{action}:#{Time.current.to_i / 60}"
    current_count = Redis.current.incr(key)
    Redis.current.expire(key, 120) # Expire in 2 minutes

    if current_count > limit
      raise "Rate limit exceeded for #{action}"
    end

    true
  end
end
# In your ActionCable channel
def receive(data)
  limiter = RealTimeRateLimiter.new
  limiter.limit_per_minute(current_user.id, 'chat_message')
  # ... proceed to process message
end
You should apply limits to different actions: messages sent, connections initiated, broadcasts triggered. This protects your server’s resources and ensures a single user cannot degrade the experience for everyone else.
Finally, real-time data isn’t just for user-facing features. It’s incredibly useful for internal dashboards and analytics. You can track events as they happen and aggregate them in real-time to show live activity charts.
This involves emitting an event for every meaningful action—a page view, a button click, a purchase. A separate aggregation service consumes these events and updates running totals, averages, or unique user counts.
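The aggregation step itself is straightforward. Here is an in-memory sketch (the class name is my own) of keeping running totals and unique-user counts per event; a real service would consume events from the stream and keep these aggregates in Redis so dashboards on any server see the same numbers.

```ruby
require "set"

# Consumes events and maintains running aggregates:
# a total count and a unique-user count per event name.
class RunningAggregator
  def initialize
    @totals = Hash.new(0)
    @unique_users = Hash.new { |h, k| h[k] = Set.new }
  end

  def consume(event_name, user_id)
    @totals[event_name] += 1
    @unique_users[event_name] << user_id
  end

  def total(event_name)
    @totals[event_name]
  end

  def unique_users(event_name)
    @unique_users[event_name].size
  end
end
```

Because a Set only stores distinct members, repeated events from the same user increment the total but not the unique count.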
Redis is again very useful here because of its speed. You can use a sorted set to store events by timestamp and then easily query “events from the last 5 minutes.”
class LiveAnalytics
  def track(event_name, user_id, properties = {})
    timestamp = Time.current.to_f
    event_data = { name: event_name, user: user_id, props: properties, time: timestamp }

    # Store in a sorted set by timestamp
    Redis.current.zadd('events:live', timestamp, event_data.to_json)

    # Increment a rolling counter for this minute
    minute_key = "count:#{event_name}:#{Time.current.to_i / 60}"
    Redis.current.incr(minute_key)
    Redis.current.expire(minute_key, 120)

    # Publish for any live dashboard to hear
    Redis.current.publish('analytics_stream', event_data.to_json)
  end

  def events_in_last(minutes = 5)
    cutoff = Time.current.to_f - (minutes * 60)
    event_json = Redis.current.zrangebyscore('events:live', cutoff, '+inf')
    event_json.map { |json| JSON.parse(json) }
  end
end
You can then have a dashboard that connects via SSE or WebSocket to the analytics_stream. It receives events the moment they happen and updates charts and numbers without anyone hitting a refresh button. This gives you an immediate pulse on what’s happening in your application.
Choosing the right pattern depends on your specific need. For simple notifications, SSE might be the easiest path. For a full collaborative app, you’ll need WebSockets and a strategy like Operational Transformation. Remember to always include rate limiting and consider how you will scale the connection layer.
Start small. Add a simple live notification system with ActionCable. See how it feels. Then, as you need more advanced features, you can layer in these other patterns. The goal is to make your application feel responsive and connected, and these tools give you a solid foundation to build upon.