Performance
How 2 make the masto run good.
Changes Made
The changes we have actually made from the default configuration; each is described either below or on a separate page:
- Split out sidekiq queues into separate service files
- Optimized postgres using pgtune
Archive
Olde changes that aren't true anymore
- Increase Sidekiq DB_POOL and -c values from 25 to 75
  - 23-11-25: Replaced with separate sidekiq service files
Sidekiq
The Sidekiq queue processes tasks requested by the mastodon rails app.
There are a few strategies for scaling sidekiq performance (see the scaling posts under References):
- Increase the DB_POOL value in the default service file (below)
- Make separate services for each of the queues
- Make multiple processes for a queue (after making a separate service)
Default Configuration
By default, the mastodon-sidekiq service is configured with 25 threads. The full service file is as follows:
[Unit]
Description=mastodon-sidekiq
After=network.target
[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="DB_POOL=25"
Environment="MALLOC_ARENA_MAX=2"
Environment="LD_PRELOAD=libjemalloc.so"
ExecStart=/home/mastodon/.rbenv/shims/bundle exec sidekiq -c 25
TimeoutSec=15
Restart=always
# Proc filesystem
ProcSubset=pid
ProtectProc=invisible
# Capabilities
CapabilityBoundingSet=
# Security
NoNewPrivileges=true
# Sandboxing
ProtectSystem=strict
PrivateTmp=true
PrivateDevices=true
PrivateUsers=true
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectControlGroups=true
RestrictAddressFamilies=AF_INET
RestrictAddressFamilies=AF_INET6
RestrictAddressFamilies=AF_NETLINK
RestrictAddressFamilies=AF_UNIX
RestrictNamespaces=true
LockPersonality=true
RestrictRealtime=true
RestrictSUIDSGID=true
RemoveIPC=true
PrivateMounts=true
ProtectClock=true
# System Call Filtering
SystemCallArchitectures=native
SystemCallFilter=~@cpu-emulation @debug @keyring @ipc @mount @obsolete @privile>
SystemCallFilter=@chown
SystemCallFilter=pipe
SystemCallFilter=pipe2
ReadWritePaths=/home/mastodon/live
[Install]
WantedBy=multi-user.target
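If we were only applying the first strategy (bumping DB_POOL and the -c thread count, as in the archived 25 to 75 change), the edit would be limited to these two lines of the file above; a rough sketch, assuming the stock paths:
Environment="DB_POOL=75"
ExecStart=/home/mastodon/.rbenv/shims/bundle exec sidekiq -c 75
followed by sudo systemctl daemon-reload and sudo systemctl restart mastodon-sidekiq to pick up the change.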
Separate Services
Even after increasing the number of worker threads to 75, we were still getting huge backlogs on our queues, particularly pull, which was loading up with link crawl workers; presumably the slower jobs were getting in the way of the faster ones and everything piled up.
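A quick way to see how large each backlog is (not part of the original notes, just a handy check) is to ask sidekiq directly, run as the mastodon user from /home/mastodon/live:

RAILS_ENV=production bundle exec rails runner \
  'Sidekiq::Queue.all.each { |q| puts "#{q.name}: #{q.size} jobs, #{q.latency.round}s latency" }'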
We want to split sidekiq into multiple processes using separate systemd service files. We want to a) keep the site responsive by processing high-priority queues quickly, but also b) use all our available resources by not letting processes sit idle. So we give each of the main queues one service file with that queue as the top priority, and mix the other queues in as secondary priorities: sidekiq will process items from the first queue first, the second queue second, and so on.
So we allocate 25 threads (and 25 db connections) each to four service files with the following priority orders. Note that we only do this after increasing the maximum postgres connections to 200, see https://hazelweakly.me/blog/scaling-mastodon/#db_pool-notes-from-nora's-blog
- default, ingress, pull, push
- ingress, default, push, pull
- push, pull, default, ingress
- pull, push, default, ingress
And two additional service files that give 5 threads each to the lower-priority queues (sketched after the example below):
- mailers
- scheduler
Each service file looks like this (the example shows the push-first one):
Environment="DB_POOL=25"
ExecStart=/home/mastodon/.rbenv/shims/bundle exec sidekiq -q push -q pull -q default -q ingress -c 25
and is located in /etc/systemd/system with the name of its primary queue (e.g. /etc/systemd/system/mastodon-sidekiq-default.service).
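The mailers and scheduler files follow the same pattern with a single queue and 5 threads each; we didn't copy the exact lines, but they presumably look like:
Environment="DB_POOL=5"
ExecStart=/home/mastodon/.rbenv/shims/bundle exec sidekiq -q mailers -c 5
(with -q scheduler instead of -q mailers in the scheduler unit).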
Then we make one meta-service file mastodon-sidekiq.service that can control the others:
[Unit]
Description=mastodon-sidekiq
After=network.target
Wants=mastodon-sidekiq-default.service
Wants=mastodon-sidekiq-ingress.service
Wants=mastodon-sidekiq-mailers.service
Wants=mastodon-sidekiq-pull.service
Wants=mastodon-sidekiq-push.service
Wants=mastodon-sidekiq-scheduler.service
[Service]
Type=oneshot
ExecStart=/bin/echo "mastodon-sidekiq exists only to collectively start and stop mastodon-sidekiq-* instances, shimmi>
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
and make each subsidiary service dependent on the main service by adding it to the WantedBy= line in that service's [Install] section:
[Install]
WantedBy=multi-user.target mastodon-sidekiq.service
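Once the six unit files are in place and their [Install] sections updated, the rollout is roughly the standard systemd dance (exact commands weren't recorded, so adjust as needed):
sudo systemctl daemon-reload
sudo systemctl enable mastodon-sidekiq-{default,ingress,mailers,pull,push,scheduler}.service
sudo systemctl restart mastodon-sidekiq.service
Restarting the meta-service pulls the six workers in via its Wants= lines.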
This lets sidekiq use all the available CPU (rather than having the queues pile up while the CPU is hovering around 50% usage), which may be good or bad, but it did drain the queues from ~20k to 0 in a matter of minutes.
PostgreSQL
PGTune
Following the advice of PGTune (https://pgtune.leopard.in.ua/), postgres is configured like this:
/etc/postgresql/15/main/postgresql.conf
# DB Version: 15
# OS Type: linux
# DB Type: web
# Total Memory (RAM): 4 GB
# CPUs num: 4
# Connections num: 200
# Data Storage: ssd
max_connections = 200
shared_buffers = 1GB
effective_cache_size = 3GB
maintenance_work_mem = 256MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 2621kB
huge_pages = off
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_parallel_maintenance_workers = 2
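Note that max_connections and shared_buffers only take effect after a full restart of postgres; a quick sanity check afterwards (standard commands, not from the original notes) might look like:
sudo systemctl restart postgresql
sudo -u postgres psql -c "SHOW max_connections;"
sudo -u postgres psql -c "SHOW shared_buffers;"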
References
- https://thenewstack.io/optimizing-mastodon-performance-with-sidekiq-and-redis-enterprise/
- https://thomas-leister.de/en/scaling-up-mastodon/
- https://hazelweakly.me/blog/scaling-mastodon/
- https://www.digitalocean.com/community/tutorials/how-to-scale-your-mastodon-server
See Also
- ElasticSearch - which needs plenty of perf tuning