We will use Rails to initialize the project demo-rack-attack:
# Create project
$ rails new demo-rack-attack
$ cd demo-rack-attack
# Add in gems
$ bundle add rack-attack
# Create a test controller
$ bin/rails g controller hello index
create app/controllers/hello_controller.rb
route get 'hello/index'
invoke erb
create app/views/hello
create app/views/hello/index.html.erb
invoke test_unit
create test/controllers/hello_controller_test.rb
invoke helper
create app/helpers/hello_helper.rb
invoke test_unit
We are now ready to update some configuration to set up our rate-limiting:
Updating our app config
Update config/application.rb to turn off forgery protection so we can use HTTPie for testing:
require_relative "boot"
require "rails/all"
# Require the gems listed in Gemfile, including any gems
# you've limited to :test, :development, or :production.
Bundler.require(*Rails.groups)
module DemoRackAttack
class Application < Rails::Application
# Initialize configuration defaults for originally generated Rails version.
config.load_defaults 7.0
# Configuration for the application, engines, and railties goes here.
#
# These settings can be overridden in specific environments using the files
# in config/environments, which are processed later.
#
# config.time_zone = "Central Time (US & Canada)"
# config.eager_load_paths << Rails.root.join("extras")
config.action_controller.default_protect_from_forgery = false if Rails.env.development?
end
end
Updating the development environment
Rack Attack uses the Rails cache by default in Rails projects, so we need to point the development cache at Redis.
Update config/environments/development.rb to use the Redis cache store:
require 'active_support/core_ext/integer/time'
Rails.application.configure do
# ... omitted
# Enable/disable caching. By default caching is disabled.
# Run rails dev:cache to toggle caching.
if Rails.root.join('tmp/caching-dev.txt').exist?
config.action_controller.perform_caching = true
config.action_controller.enable_fragment_cache_logging = true
# !!! CHANGE HERE
# config.cache_store = :memory_store
config.cache_store = :redis_cache_store, { url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/1') }
config.public_file_server.headers = {
'Cache-Control' => "public, max-age=#{2.days.to_i}"
}
else
config.action_controller.perform_caching = false
config.cache_store = :null_store
end
# ... omitted
end
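The :redis_cache_store backend relies on the redis gem, which a fresh Rails app may not include. If it is missing, add it to the Gemfile (or run bundle add redis):

```ruby
# Gemfile
gem "redis"
```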
Now that our Rails cache is configured for Redis, we can add our Rack Attack configuration to our application.
Updating our Rack Attack configuration
Inside of config/initializers/rack_attack.rb, add the following:
class Rack::Attack
### Configure Cache ###
# If you don't want to use Rails.cache (Rack::Attack's default), then
# configure it here.
#
# Note: The store is only used for throttling (not blocklisting and
# safelisting). It must implement .increment and .write like
# ActiveSupport::Cache::Store
# Rack::Attack.cache.store = ActiveSupport::Cache::MemoryStore.new
### Throttle Spammy Clients ###
# If any single client IP is making tons of requests, then they're
# probably malicious or a poorly-configured scraper. Either way, they
# don't deserve to hog all of the app server's CPU. Cut them off!
#
# Note: If you're serving assets through rack, those requests may be
# counted by rack-attack and this throttle may be activated too
# quickly. If so, enable the condition to exclude them from tracking.
# Throttle all requests by IP (60rpm)
#
# Key: "rack::attack:#{Time.now.to_i/:period}:req/ip:#{req.ip}"
throttle('req/ip', limit: 300, period: 5.minutes) do |req|
req.ip # unless req.path.start_with?('/assets')
end
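To make the throttling mechanics concrete, here is a minimal, self-contained sketch of the fixed-window counting a throttle like the one above performs. This is an illustration only, not Rack::Attack's actual implementation, which delegates counting to the configured cache store:

```ruby
# Toy fixed-window rate limiter: one counter per (window, discriminator) pair.
class FixedWindowLimiter
  def initialize(limit:, period:)
    @limit = limit
    @period = period          # window length in seconds
    @counters = Hash.new(0)   # in-memory stand-in for the cache store
  end

  # Returns true when this request exceeds the limit for the current window.
  def throttled?(discriminator, now = Time.now.to_i)
    key = "#{now / @period}:#{discriminator}"  # same shape as Rack::Attack's cache keys
    @counters[key] += 1
    @counters[key] > @limit
  end
end

limiter = FixedWindowLimiter.new(limit: 5, period: 10)
results = (1..10).map { limiter.throttled?("1.2.3.4", 100) }
# With all 10 hits in one window, the first 5 pass and the rest are throttled.
```

When now / period ticks over to the next window, a fresh key is used and the count effectively resets, which is why these limits apply per fixed window rather than per sliding window.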
### Prevent Brute-Force Login Attacks ###
# The most common brute-force login attack is a brute-force password
# attack where an attacker simply tries a large number of emails and
# passwords to see if any credentials match.
#
# Another common method of attack is to use a swarm of computers with
# different IPs to try brute-forcing a password for a specific account.
# Throttle POST requests to /login by IP address
#
# Key: "rack::attack:#{Time.now.to_i/:period}:logins/ip:#{req.ip}"
throttle('logins/ip', limit: 5, period: 20.seconds) do |req|
if req.path == '/login' && req.post?
req.ip
end
end
# Throttle POST requests to /login by email param
#
# Key: "rack::attack:#{Time.now.to_i/:period}:logins/email:#{normalized_email}"
#
# Note: This creates a problem where a malicious user could intentionally
# throttle logins for another user and force their login requests to be
# denied, but that's not very common and shouldn't happen to you. (Knock
# on wood!)
throttle('logins/email', limit: 5, period: 20.seconds) do |req|
if req.path == '/login' && req.post?
# Normalize the email, using the same logic as your authentication process, to
# protect against rate-limit bypasses. Return the normalized email if present, nil otherwise.
req.params['email'].to_s.downcase.gsub(/\s+/, "").presence
end
end
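The normalization above matters: without it, "Alice@Example.COM" and "alice@example.com" would be counted under separate keys and the throttle could be bypassed. In plain Ruby (using an emptiness check in place of ActiveSupport's presence), the discriminator logic looks like:

```ruby
# Mirror of the email discriminator: downcase, strip all whitespace,
# and return nil for blank input so no throttle counter is touched.
def normalize_email(raw)
  normalized = raw.to_s.downcase.gsub(/\s+/, "")
  normalized.empty? ? nil : normalized
end

normalize_email(" Alice@Example.COM ")  # => "alice@example.com"
normalize_email(nil)                    # => nil (request is not counted)
```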
### Custom Throttle Response ###
# By default, Rack::Attack returns an HTTP 429 for throttled responses,
# which is just fine.
#
# If you want to return 503 so that the attacker might be fooled into
# believing that they've successfully broken your app (or you just want to
# customize the response), then uncomment these lines.
# self.throttled_response = lambda do |env|
# [ 503, # status
# {}, # headers
# ['']] # body
# end
throttle('example', limit: 5, period: 10.seconds) do |req|
if req.path == '/hello' && req.get?
req.ip
end
end
end
Most of this comes from the Rack Attack basic config; we added one extra throttle for the /hello route we will be testing, which allows 5 requests per IP every 10 seconds.
With the configuration done, all we need to do is set up our routes and controller.
Updating the Hello Controller
In app/controllers/hello_controller.rb, let's add a basic JSON response:
class HelloController < ApplicationController
def index
render json: { message: 'Hello World' }
end
end
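For this simple hash, render json: produces the same output as a plain to_json call, which also explains the 25-byte Document Length that ab reports later:

```ruby
require 'json'

# The same serialization render json: performs for this hash.
body = { message: 'Hello World' }.to_json
body           # => "{\"message\":\"Hello World\"}"
body.bytesize  # => 25
```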
Updating the routes
Finally, we need to update our routes to include our new /hello route.
In config/routes.rb:
Rails.application.routes.draw do
resources :hello, only: [:index]
# Define your application routes per the DSL in https://guides.rubyonrails.org/routing.html
# Defines the root path route ("/")
# root "articles#index"
end
We are now ready to test out our rate-throttling.
Testing our Rack Attack configuration
We need to toggle on the development cache (bin/rails dev:cache creates the tmp/caching-dev.txt file that our development.rb condition checks, switching the cache store to Redis) and then start the Rails server.
# Turn on cache and run server
$ bin/rails dev:cache
$ bin/rails s
We will use ab (ApacheBench) to fire multiple requests at our new route in quick succession.
# Use ab to run 10 requests in quick succession.
# Note that 5 fail due to throttling.
$ ab -n 10 http://localhost:3000/hello
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient).....done
Server Software:
Server Hostname: localhost
Server Port: 3000
Document Path: /hello
Document Length: 25 bytes
Concurrency Level: 1
Time taken for tests: 0.190 seconds
Complete requests: 10
Failed requests: 5
(Connect: 0, Receive: 0, Length: 5, Exceptions: 0)
Non-2xx responses: 5
Total transferred: 5180 bytes
HTML transferred: 185 bytes
Requests per second: 52.62 [#/sec] (mean)
Time per request: 19.005 [ms] (mean)
Time per request: 19.005 [ms] (mean, across all concurrent requests)
Transfer rate: 26.62 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 0
Processing: 13 19 5.9 18 32
Waiting: 12 19 5.9 17 32
Total: 13 19 5.9 18 33
Percentage of the requests served within a certain time (ms)
50% 18
66% 20
75% 21
80% 24
90% 33
95% 33
98% 33
99% 33
100% 33 (longest request)
As expected, 5 of the 10 requests fail: the whole run finishes in 0.190 seconds, well within a single 10-second window, so only the first 5 requests get through.
We can also see this reflected in the logs from Rails server:
Started GET "/hello" for ::1 at 2022-03-02 21:58:42 +1000
Processing by HelloController#index as */*
Completed 200 OK in 12ms (Views: 0.5ms | ActiveRecord: 0.0ms | Allocations: 2367)
Started GET "/hello" for ::1 at 2022-03-02 21:58:42 +1000
Processing by HelloController#index as */*
Completed 200 OK in 1ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 114)
Started GET "/hello" for ::1 at 2022-03-02 21:58:42 +1000
Processing by HelloController#index as */*
Completed 200 OK in 1ms (Views: 0.3ms | ActiveRecord: 0.0ms | Allocations: 114)
Started GET "/hello" for ::1 at 2022-03-02 21:58:42 +1000
Processing by HelloController#index as */*
Completed 200 OK in 1ms (Views: 0.3ms | ActiveRecord: 0.0ms | Allocations: 114)
Started GET "/hello" for ::1 at 2022-03-02 21:58:42 +1000
Processing by HelloController#index as */*
Completed 200 OK in 1ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 114)
Started GET "/hello" for ::1 at 2022-03-02 21:58:42 +1000
Started GET "/hello" for ::1 at 2022-03-02 21:58:42 +1000
Started GET "/hello" for ::1 at 2022-03-02 21:58:42 +1000
Started GET "/hello" for ::1 at 2022-03-02 21:58:42 +1000
Started GET "/hello" for ::1 at 2022-03-02 21:58:42 +1000
The last 5 requests are rejected by the Rack::Attack middleware before they reach the controller, so no Processing or Completed lines are logged for them.
In addition, if you run redis-cli monitor in another terminal while making the requests, you will see output like the following:
This helps us to confirm that our Redis cache is being used for rate-limiting and also gives us insight into the commands invoked when we make all of our requests.
Looking deeper into the requests with HTTPie
If we use HTTPie to inspect our successful and throttled requests (for example, http :3000/hello), we will see something similar to the following:
You can see that we get a 429 Too Many Requests response code once we exceed the rate limit for the period.
Summary
This demonstrates a simple rate-limiting setup with Rack Attack: Redis stores the throttle counters, while ab and HTTPie let us observe the successful and throttled responses.