As the moderator of a chat app, you'll occasionally have users engaging in bad behavior in a chat room, especially when the room has lots of users. If you don’t do something as the moderator, it will affect the other users as well. That’s why it’s important to have some sort of automated software monitoring chat rooms to prevent this behavior from doing any damage.
Basic knowledge of React Native and Node.js is required to follow this tutorial.
The following versions are used for building the app. Be sure to switch to these versions if you encounter any issues running the app:
We’ll be using Chatkit to implement the chat functionality, so create an account and an app instance if you don’t already have one.
For the profanity filtering, we’ll be using the WebPurify API. It’s free to use for 14 days; you can sign up here. If you want a free solution, you can also use PurgoMalum. I’ll show you how to use this free alternative later on.
Lastly, we’ll use ngrok to expose the server to the internet. You also need an account for that.
We will be updating an existing chat app with the following features already implemented:
We’ll update it so that it will mask the profanities that are present in a text. We’ll also add a webhook for monitoring messages sent in a chat room so that when it detects profanities, the user will be banned from using the service.
Here’s what the app will look like when detecting and masking profanities:
Here’s what it will look like when the user is banned from joining a room:
You can find the complete code on this GitHub repo.
This section assumes that you already have an existing WebPurify account. Once you do, go to your account dashboard and click on the View button:
That should redirect you to the page where you can see the API key that you can use for making requests. Take note of it for now:
As mentioned earlier, we will be updating an existing chat app. To get it, you have to clone the GitHub repo and switch to the starter branch:
git clone https://github.com/anchetaWern/RNChatkitProfanityFilter.git
cd RNChatkitProfanityFilter
git checkout starter
yarn
react-native eject
react-native link react-native-gesture-handler
react-native link react-native-config
React Native Config has an extra step for Android; go ahead and follow that as well.
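For reference, at the time of writing the documented Android step is adding React Native Config's dotenv Gradle script to your app's build file (check the React Native Config README in case this has changed for your version):

```groovy
// android/app/build.gradle
// makes the variables in .env available to the Android build
apply from: project(':react-native-config').projectDir.getPath() + "/dotenv.gradle"
```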
Next, update the .env and server/.env files with your Chatkit and WebPurify API credentials:
// .env
CHATKIT_INSTANCE_LOCATOR_ID="YOUR CHATKIT INSTANCE LOCATOR ID"
CHATKIT_SECRET_KEY="YOUR CHATKIT SECRET KEY"
CHATKIT_TOKEN_PROVIDER_ENDPOINT="YOUR CHATKIT TOKEN PROVIDER ENDPOINT"
WEB_PURIFY_API_KEY="YOUR WEB PURIFY API KEY"
// server/.env
CHATKIT_INSTANCE_LOCATOR_ID="YOUR CHATKIT INSTANCE LOCATOR ID"
CHATKIT_SECRET_KEY="YOUR CHATKIT SECRET KEY"
CHATKIT_WEBHOOK_SECRET="YOUR WEBHOOK SECRET KEY"
Now we’re ready to start updating the app. We’ll first update the two screens, Chat and Rooms, then we’ll proceed to update the server.
Start by importing the modules that we need:
// src/screens/Chat.js
import { ActivityIndicator, View, Alert } from 'react-native'; // add Alert
// ...
import Config from 'react-native-config';
import axios from 'axios'; // add this
Read the WebPurify API key value from the .env file:
const CHATKIT_TOKEN_PROVIDER_ENDPOINT = Config.CHATKIT_TOKEN_PROVIDER_ENDPOINT;
const WEB_PURIFY_API_KEY = Config.WEB_PURIFY_API_KEY; // add this
When the user sends a message, we make a request to the API before actually sending it. This way, the message that gets sent will already have the profanities masked. The WebPurify API expects the request options to be submitted as query parameters:
- method - the API method you want to use. In this case, we want the API to mask the profanities in the text, and the webpurify.live.replace method fits this perfectly. You can also check their API documentation for other methods.
- api_key - the API key that you got earlier.
- format - by default, the API returns an XML response, so we have to specify json because it’s easier to work with.
- replacesymbol - the character to use to replace the profane text. Each character in the profane text will be replaced with this symbol.
- text - the original text message to be sent by the user.
- lang - the language of the original text. This is optional if you only need to work with the English language. But if you expect your users to speak a different language, you have to specify it, because the profanities might not be detected if you rely on the default language.

Once we get a response back, we simply check for the existence of the symbol (*) in the filtered text to determine whether there’s a profanity in the text. Alternatively, you can check the found key in the response, as it contains the number of profanities found in the text. We’re opting to check for the symbol’s existence so you can easily replace it with the PurgoMalum API which I mentioned earlier.
When profanities are detected, we warn the user that they will get banned if they continue engaging in their bad behavior:
onSend = async ([message]) => {
  try {
    const response = await axios.get(
      "http://api1.webpurify.com/services/rest/",
      {
        params: {
          method: 'webpurify.live.replace',
          api_key: WEB_PURIFY_API_KEY,
          format: 'json',
          replacesymbol: '*',
          text: message.text
        }
      }
    );

    const filtered_text = response.data.rsp.text;
    if (filtered_text.includes('***')) { // or response.data.rsp.found > 0
      Alert.alert("Profanity detected", "You'll only get 2 warnings. After that, you will be banned from using the service.");
    }

    const message_parts = [
      { type: "text/plain", content: filtered_text }
    ];

    await this.currentUser.sendMultipartMessage({
      roomId: this.room_id,
      parts: message_parts
    });
  } catch (send_msg_err) {
    console.log("error sending message: ", send_msg_err);
  }
}
Here’s a sample response that you can get from the API:
{
  "rsp": {
    "@attributes": {
      "stat": "ok"
    },
    "method": "webpurify.live.replace",
    "format": "rest",
    "found": "3",
    "text": "im tired of your ********",
    "api_key": "YOUR API KEY"
  }
}
If you want to use the PurgoMalum API, use the following code instead. This will also replace the profanities in the text with asterisks by default:
const response = await axios.get(
  "https://www.purgomalum.com/service/json",
  {
    params: {
      text: message.text,
    }
  }
);
Here’s a sample result that you might get:
{
  result: "im tired of all your ********"
}
So you need to replace your response extraction code with this:
const filtered_text = response.data.result;
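Since the two services return the masked text in slightly different shapes, one way to make the swap painless is a small helper that normalizes both responses. This is just a sketch, not part of the tutorial code, and extractFilteredText is a made-up name:

```javascript
// Hypothetical helper: pull the masked text out of either provider's response.
// WebPurify nests it under rsp.text; PurgoMalum returns it under result.
const extractFilteredText = (data) => {
  if (data.rsp && typeof data.rsp.text === "string") {
    return data.rsp.text; // WebPurify shape
  }
  if (typeof data.result === "string") {
    return data.result; // PurgoMalum shape
  }
  throw new Error("Unrecognized profanity API response");
};

console.log(extractFilteredText({ rsp: { text: "im tired of your ********" } }));
// im tired of your ********
console.log(extractFilteredText({ result: "im tired of all your ********" }));
// im tired of all your ********
```

With a helper like this in place, onSend could call it on response.data regardless of which service the axios request points at.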
Next, we need to update the code for initializing the connection to Chatkit so that it listens for the event when the current user is removed from the room. We need this because when the user has accumulated three offenses, they will be automatically removed from the room. We want to inform the user when that happens.
To implement the listener, we can use Chatkit’s Connection Hooks. The event that we need to listen for is onRemovedFromRoom. The function that you pass here will get executed when the current user is removed from the current room:
async componentDidMount() {
  try {
    const chatManager = new ChatManager({
      instanceLocator: CHATKIT_INSTANCE_LOCATOR_ID,
      userId: this.user_id,
      tokenProvider: new TokenProvider({ url: CHATKIT_TOKEN_PROVIDER_ENDPOINT })
    });

    let currentUser = await chatManager.connect({
      onRemovedFromRoom: this.onRemovedFromRoom // add this
    });
    this.currentUser = currentUser;

    await this.currentUser.subscribeToRoomMultipart({
      roomId: this.room_id,
      hooks: {
        onMessage: this.onReceive
      },
      messageLimit: 10
    });

    await this.setState({
      room_users: this.currentUser.users
    });
  } catch (chat_mgr_err) {
    console.log("error with chat manager: ", chat_mgr_err);
  }
}
Here’s the code for the onRemovedFromRoom() function. Since the user is no longer a member of the room, we simply disconnect them and navigate back to the login screen:
onRemovedFromRoom = (room) => {
  this.currentUser.disconnect();
  Alert.alert("Kicked out", "You've been kicked out of the room because of bad behavior.");
  this.props.navigation.navigate("Login");
}
For the Rooms screen, all we have to do is replace the console.log() with an alert. When a user fails to join a room, it means they got banned. Once a user commits their third offense, they can no longer join any room. We’ll update the code for the /user/join route later so it checks the user’s offense count:
// src/screens/Rooms.js
joinRoom = async (room) => {
  try {
    await axios.post(`${CHAT_SERVER}/user/join`, { room_id: room.id, user_id: this.user_id });
    Alert.alert("Joined Room", `You are now a member of [${room.name}]`);
    this.goToChatScreen(room);
  } catch (join_room_err) {
    Alert.alert("Cannot join", "You are banned."); // update this
  }
}
Now we’re ready to update the server. It already contains the code for implementing the basic chat functionality (getting the list of rooms and joining a room). All we have to do now are the following:
Start by importing the additional module that we’ll need. This is the built-in crypto module in Node.js. It will allow us to compute the request signature and compare it with the one sent by the server that made the request:
// server/index.js
const Chatkit = require("@pusher/chatkit-server");
const crypto = require("crypto"); // add this
Next, add the CHATKIT_WEBHOOK_SECRET. At this point, it doesn’t have a value yet because we haven’t added the corresponding webhook to the Chatkit app instance. We’ll add it later once we run the app:
const CHATKIT_SECRET_KEY = process.env.CHATKIT_SECRET_KEY;
const CHATKIT_WEBHOOK_SECRET = process.env.CHATKIT_WEBHOOK_SECRET; // add this
To handle the requests triggered by Chatkit’s webhook, we need to use the bodyParser.text() Express middleware. This allows us to parse the request body as plain text, because, as you’ll see later, the function for verifying the webhook request (verifyRequest()) requires the raw request body. We’ll still use bodyParser.json() for everything else:
// app.use(bodyParser.json()); // remove this
app.use(
  bodyParser.text({
    type: (req) => {
      const user_agent = req.headers['user-agent'];
      if (user_agent === 'pusher-webhooks') {
        return true;
      }
      return false;
    },
  })
);

app.use(
  bodyParser.json({
    type: (req) => {
      const user_agent = req.headers['user-agent'];
      if (user_agent !== 'pusher-webhooks') {
        return true;
      }
      return false;
    }
  })
);
You can read more about Chatkit Webhooks here.
Here’s the verifyRequest() function. It returns true if the request really came from Chatkit’s server. We’ll use it later in the /webhook route:
const verifyRequest = (req) => {
  const signature = crypto
    .createHmac("sha1", CHATKIT_WEBHOOK_SECRET)
    .update(req.body)
    .digest("hex");
  return signature === req.get("webhook-signature");
}
Next, add the functions that we’ll use to get a user’s data, update a user, and remove a user from a room. We’ll use these functions later in the /webhook and /user/join routes:
// for getting a user's data
const getUser = async (user_id) => {
  try {
    const user = await chatkit.getUser({
      id: user_id,
    });
    return user;
  } catch (err) {
    console.log("error getting user: ", err);
  }
}

// for setting a user's profanity_count
const updateUser = async (user_id, profanity_count) => {
  try {
    const user = await chatkit.updateUser({
      id: user_id,
      customData: {
        profanity_count
      }
    });
    return user;
  } catch (err) {
    console.log("error updating user: ", err);
  }
}

// for removing a user from a room
const removeUserFromRoom = async (room_id, user_id) => {
  try {
    await chatkit.removeUsersFromRoom({
      roomId: room_id,
      userIds: [user_id]
    });
  } catch (err) {
    console.log("error removing user from room: ", err);
  }
}
Next, add the route for processing the webhook. It first verifies that the request really came from Chatkit, using the verifyRequest() function we added earlier. It then extracts the relevant details from the request payload. Note that we’re now dealing with plain text, so we have to convert it to an object using JSON.parse(). From there, we get the user’s current profanity_count and increment it if the message has a triple asterisk in it. If the user’s profanity_count becomes greater than two, we remove the user from the room:
app.post('/webhook', async (req, res) => {
  try {
    if (verifyRequest(req)) {
      const { payload } = JSON.parse(req.body);
      const sender_id = payload.messages[0].user_id;
      const room_id = payload.messages[0].room_id;
      const sender = await getUser(sender_id);
      const message = payload.messages[0].parts[0].content;

      let profanity_count = (sender.custom_data) ? sender.custom_data.profanity_count : 0;
      if (message.includes('***')) {
        profanity_count += 1;
        await updateUser(sender_id, profanity_count); // increment the profanity_count
      }
      if (profanity_count >= 3) {
        await removeUserFromRoom(room_id, sender_id);
      }
    }
  } catch (err) {
    console.log('webhook error: ', err);
  }
  res.send('ok');
});
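The counting rules in the route above boil down to a tiny pure function. This is just a sketch for clarity, not part of the server code, and applyOffense is a made-up name:

```javascript
// Given the user's stored profanity_count and an already-masked message,
// return the updated count and whether the user should be banned.
// Mirrors the webhook route: '***' marks a detected profanity,
// and three offenses trigger removal from the room.
const applyOffense = (profanityCount, maskedText) => {
  const count = maskedText.includes('***') ? profanityCount + 1 : profanityCount;
  return { count, shouldBan: count >= 3 };
};

console.log(applyOffense(0, "hello there"));
// { count: 0, shouldBan: false }
console.log(applyOffense(2, "im tired of your ********"));
// { count: 3, shouldBan: true }
```

Keeping the logic in a pure function like this also makes it easy to unit-test the ban threshold without spinning up the webhook.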
You can find a sample request data of the message.created webhook here.
When a user joins a room, we first check their profanity_count to see whether they’ve been banned. If it’s less than three, they can still join. Otherwise, we simply return an error:
app.post("/user/join", async (req, res) => {
  const { room_id, user_id } = req.body;
  try {
    const user = await getUser(user_id); // get user data
    if (user.custom_data && user.custom_data.profanity_count >= 3) { // check number of offenses
      res.status(403).send('Cannot perform action');
    } else {
      await chatkit.addUsersToRoom({
        roomId: room_id,
        userIds: [user_id]
      });
      res.send('ok');
    }
  } catch (user_permissions_err) {
    console.log("error getting user permissions: ", user_permissions_err);
    res.status(500).send('Something went wrong'); // respond so the request doesn't hang
  }
});
Lastly, you can add an /unban route that you can use for resetting a specific user’s profanity_count. This will allow you to test whether a banned user can go back to doing what they could previously do:
app.get("/unban", async (req, res) => {
  const { user_id } = req.query;
  await updateUser(user_id, 0);
  res.send('ok');
});
At this point, you’re ready to run the app and add the webhook to the Chatkit app instance.
First, run the server and expose it to the internet:
node server/index.js
~/ngrok http 5000
Next, replace the value of CHAT_SERVER with your ngrok HTTPS URL:
// src/screens/Chat.js
const CHAT_SERVER = "YOUR NGROK HTTPS URL";
For the webhooks to work, you have to add the endpoint (/webhook) to your Chatkit app settings. Enable the Message Created trigger so Chatkit’s server sends a request to your endpoint every time a new message is sent. Don’t forget to save the changes and update your server/.env file with the value of the webhook secret that you’ve added:
While you’re here, you can also set up two users (the admin and the user used for testing) and a room. Be sure that the admin user is the one that creates the room, not the user you’ll be using for testing, because the test user will be removed from the room. It’s possible to remove a room’s creator from the room, but it’s not recommended, especially if you have permissions in place:
Finally, you can now run the app:
react-native run-android
react-native run-ios
You can test out the profanity filtering by sending some “colorful words”. On the third offense, you’ll be kicked out of the room, and you won’t be able to join or enter any other rooms because the profanity_count is checked beforehand. The only way to unban the user is to manually reset the value of profanity_count. You can do that by hitting http://localhost:5000/unban?user_id=YOUR_USER_ID in your browser.
In this tutorial, you learned how to implement profanity moderation in a React Native chat app. Specifically, you learned how to use the WebPurify API and PurgoMalum API to easily filter out profanities within a text. You also learned how to use Chatkit webhooks to monitor messages sent in a chat room and ban users who continue engaging in bad behavior.