Apple guidelines 14.3, "ability to block abusive users" [closed]

The full rule is the following:

Apps that display user generated content must include a method for
filtering objectionable material, a mechanism for users to flag 
offensive content, and the ability to block abusive users from the service

My app does feature user-generated content. It has a feature that lets users flag a post they find objectionable; I receive a notification in the database when this happens and personally judge whether the content should be deleted. However, there is no feature that lets one user block another directly. There is no "following" or "friend requests" in my app; it is more like a communal forum where you read others' content and can post your own, but don't directly follow anyone.

My method of blocking users from the service is deleting their accounts and associated posts from the database. Is what I have enough? I find the guideline's wording somewhat ambiguous.

jjjjjjjj asked Oct 14 '15 19:10


2 Answers

The accepted answer is no longer true. I just had an app rejected because there is no mechanism for users to block other users. We already have a user-driven content flagging system, and demonstrated that there was a process in place for reviewing and removing objectionable content and blocking abusive users from the service altogether, but Apple said:

In addition to the reporting mechanism, it would be appropriate to implement a separate blocking mechanism that allows a user to block abusive users.

…where “it would be appropriate to” apparently means “your app is rejected until you.”

We pointed out that our staff block users from the whole service if they post abusive content, but Apple says this is not sufficient: users must now be able to directly block each other.

This is section 1.2 of the updated review guidelines.
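In practice, the user-to-user blocking Apple asks for can be quite small: keep a per-user set of blocked IDs and filter blocked authors out of every feed before display. Below is a minimal sketch of that idea; the type names (`Post`, `BlockList`) and the in-memory storage are assumptions for illustration, not anything from Apple's guidelines — a real app would persist the list and apply it server-side too.

```swift
import Foundation

// Hypothetical post model for the sketch.
struct Post {
    let authorID: String
    let body: String
}

// Tracks which users the current user has blocked and hides
// their content from any feed before it is displayed.
final class BlockList {
    private var blockedIDs: Set<String> = []

    func block(userID: String)   { blockedIDs.insert(userID) }
    func unblock(userID: String) { blockedIDs.remove(userID) }
    func isBlocked(_ userID: String) -> Bool { blockedIDs.contains(userID) }

    // Return only the posts whose author is not blocked.
    func filter(_ feed: [Post]) -> [Post] {
        feed.filter { !blockedIDs.contains($0.authorID) }
    }
}
```

The point for review purposes is that blocking is initiated by the user and takes effect immediately in that user's view, independent of any staff moderation pipeline.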

Paul Cantrell answered Oct 23 '22 14:10


You'll be okay. Our team created an app which allowed users to post video content in a communal context. We were required to implement a flagging system (which it sounds like you have), and show that we had a process in place to deal with the flagged content. Once we did that, we were approved.

Adam G answered Oct 23 '22 16:10