I am now ready to submit my app, and I'm reading in Apple's App Store Review Guidelines that I'm required to have a method for filtering objectionable material from being posted to my app. In my case, I believe this means I need a way to filter the chat/posts so that people cannot bully each other or post pornographic pictures in the chat.
Has anyone encountered this before? Any recommendation on the best way to proceed? Perhaps there is a way to add a list of objectionable words and phrases to the chat and/or Firebase to prevent certain objectionable things from being posted? Are there any pre-existing filters you can import? I'm using Firebase.
I really have no idea how to solve this. Thanks for the comments.
I have had an app rejected for not providing a way to hide content that a user deems unsuitable.
You can add a “do not show me again” action, and you must also add a reporting system so users can flag any abusive content.
In my case I added two buttons: hide and report.
Hiding applies only to the user who hid the content. For reported content, once a post receives three reports it gets hidden from the whole community.
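The hide/report logic above can be sketched in a few lines. This is a minimal illustration, not Firebase code: names like `Post`, `reportPost`, and `REPORT_THRESHOLD` are my own, and in a real app this state would live in your Firebase database (with the threshold check ideally enforced server-side, e.g. in a Cloud Function, so clients can't bypass it).

```typescript
// Illustrative sketch of per-user hiding plus a community report threshold.
// All names here are assumptions; persist this state in Firebase in practice.

const REPORT_THRESHOLD = 3; // hide from everyone after three distinct reports

interface Post {
  id: string;
  text: string;
  reporters: Set<string>;  // user IDs that reported this post
  hiddenBy: Set<string>;   // user IDs that chose "do not show me again"
  hiddenForAll: boolean;   // true once the report threshold is reached
}

// "Hide" button: hides the post for this user only.
function hidePost(post: Post, userId: string): void {
  post.hiddenBy.add(userId);
}

// "Report" button: records the report; hides the post for the whole
// community once REPORT_THRESHOLD distinct users have reported it.
function reportPost(post: Post, userId: string): void {
  post.reporters.add(userId);
  if (post.reporters.size >= REPORT_THRESHOLD) {
    post.hiddenForAll = true;
  }
}

// Visibility check used when rendering the feed for a given user.
function isVisible(post: Post, userId: string): boolean {
  return !post.hiddenForAll && !post.hiddenBy.has(userId);
}
```

Using a `Set` for reporters means repeated reports from the same user don't count toward the threshold.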
This was my way of doing it, you can come up with your own vision.
Apple will also want you to address this issue in the terms of use that users must accept when using your app. Most likely you'll add a checkbox on the signup screen confirming that the user has read and accepted the terms of use, and you'll provide the terms either through an external URL or a dedicated screen.