I am using django-sentry to track errors on a website. My problem is that the database has grown too big. The 'message' table and the 'groupedmessage' table are related.
Is there any way to clear older entries and specific messages, or to add the sentry tables to the Django admin?
Use the cleanup command. Two parameters are available: --days and --project.
Use it as such (the --config parameter has since been dropped):
# sentry --config=sentry.conf.py cleanup --days 360   (older versions)
sentry cleanup --days 360
Or with the optional --project parameter, which must be an integer project id:
# sentry --config=sentry.conf.py cleanup --days 360 --project 1   (older versions)
sentry cleanup --days 360 --project 1
Before discovering the cleanup command, I was also able to do this manually using the Django ORM:
$ source bin/activate
$ sentry --config=sentry.conf.py shell

from sentry.models import Event, Alert
Event.objects.all().count()
Alert.objects.all().count()
Check out the available sentry models to see what other objects you can query. From here you can use the usual Django ORM methods such as .save() and .delete() on the objects. This approach is a bit more flexible if you need granularity, i.e. modifying individual objects. One thing the cleanup command lacks is telling you how many objects it deleted; it dumps the deleted objects to the screen instead.
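As a sketch of the same idea, the age cutoff for a manual bulk delete can be computed with the standard library. The `Event.datetime` field name in the commented-out query is an assumption about the sentry models, so verify it in your version before deleting anything:

```python
from datetime import datetime, timedelta, timezone

# Compute a cutoff for "older than 360 days"; anything before this is stale.
cutoff = datetime.now(timezone.utc) - timedelta(days=360)

# Inside the sentry shell, the same cutoff can then drive a bulk delete,
# e.g. (assuming the Event model's timestamp field is called 'datetime'):
#   from sentry.models import Event
#   Event.objects.filter(datetime__lt=cutoff).delete()
print(cutoff.isoformat())
```

Unlike the cleanup command, a queryset like this lets you call .count() first to see exactly how many rows you are about to remove.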
My cleanup script looks like this and I run it @monthly using cron:
#!/bin/bash
date
cd /var/web/sentry
source bin/activate
# Older versions needed the config file (and optionally --project 1):
# exec sentry --config=sentry.conf.py cleanup --days 360 --project 1
# UPDATE: You can now drop the --config param
exec sentry cleanup --days 360
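To schedule the script, a crontab entry along these lines works; the script path and log file here are placeholders for your own setup:

```
# Run the Sentry cleanup script monthly and keep a log of each run
@monthly /var/web/sentry/cleanup.sh >> /var/log/sentry-cleanup.log 2>&1
```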
This is how you perform a cleanup using a dockerized Sentry instance with a default docker-compose.yml from the official guide:
$ # Go to a directory containing the docker-compose.yml:
$ cd sentry-config
$ docker-compose run --rm worker cleanup --days 90
Consider reading the built-in help:
$ docker-compose run --rm worker help
$ docker-compose run --rm worker cleanup --help
Use cron to perform the cleanup regularly. Run crontab -e
and add the following line:
0 3 * * * cd sentry-config && docker-compose run --rm worker cleanup --days 30
Don't forget to reclaim the disk space by running VACUUM FULL {{relation_name}};
inside a Postgres container:
$ docker exec -it {{postgres_container_id}} /bin/bash
$ psql -U postgres
postgres=# VACUUM FULL public.nodestore_node;
postgres=# VACUUM FULL {{ any large relation }};
postgres=# VACUUM FULL public.sentry_eventtag;
You can run VACUUM FULL;
without specifying a relation, but this will lock the whole database, so I recommend vacuuming relations one by one. You can find the sizes of your biggest relations by querying Postgres's catalog tables.
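One standard way to list the largest relations (not specific to Sentry) is a catalog query like this, run from the same psql session:

```sql
-- List the ten largest ordinary tables, including indexes and TOAST data
SELECT c.relname AS relation,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;
```

The relations at the top of this list are the best candidates for targeted VACUUM FULL runs.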
There is a cleanup command. Unfortunately, its behavior seems undocumented, but the code comments are pretty informative.
To follow on from radtek's helpful answer, if you only want to remove particular errors, the easiest way I've found is to call .delete() on the matching Group objects:
from sentry.models import Group
Group.objects.filter(culprit__contains='[SEARCH TERM]').delete()
where [SEARCH TERM]
is some text that appears within the error messages you want to remove.