May 2017

Volume 32 Number 5

[Cognitive Services]

Protect Web Apps Using Microsoft Content Moderator

By Maarten Van De Bospoort, Sanjeev Jagtap | May 2017

Every day billions of users take pictures and videos to share on social media. As anyone who’s dealt with user-generated content on the Internet knows, the net’s anonymity doesn’t necessarily surface the prettiest human behavior. 

Another important recent trend is the proliferation of chat bots. Not a day goes by without the release of a new set of bots that can do everything from booking travel to coordinating your meetings to online banking. While those bots are useful without a doubt, the killer chatbot is still elusive: the one bot that all the messaging platforms want in order to crack the goal of 1 billion daily active users.

Now let’s imagine you created just that: Butterfly, the bot that everyone feels compelled to engage with. Users can share media with your bot and, through your secret machine learning algorithm, the bot will predict their future for the next 24 hours. After a year of hard work, you release your bot. Overnight, Butterfly goes viral. Unfortunately, your startup dream quickly turns into a public relations nightmare. Users are submitting adult and racy content, which is then shared and becomes publicly visible to other bot users. And some of the content is bad. Really bad. Users are suing you; the phone is ringing off the hook; and you receive threats that your Web service will be shut down. You need a powerful solution to help detect and prevent the bad content from being visible to other users. And you need it quickly.

That’s where Microsoft Content Moderator comes to the rescue!

In this article, we’ll show you how Content Moderator can help. We’ll start by creating a chatbot using the Microsoft Bot Framework, but keep in mind that the information applies equally to any Web or client application. Butterfly will enable end users to share text, images and videos, and will use Content Moderator to filter out the inappropriate material before it gets published. Along the way, you’ll learn how to configure custom Content Moderator workflows and to adjust the thresholds for the content classifiers. We’ll also discuss the different connectors that can be used in the workflow, such as Text and Child Exploitation. Let’s start with an overview of content moderation.

Content Moderation

Microsoft has a long and proven track record combatting digital crime. The Microsoft Digital Crimes Unit works hard to take down botnets, limit tech support fraud, thwart phishing schemes and more. One less-visible area where the unit is active is assisting law enforcement worldwide with curbing child exploitation. Microsoft has offered PhotoDNA as a free service since 2009, and the same team now also offers content moderation.

Content moderation is an area of machine learning where computers can help humans tremendously. The amount of data generated by users is simply too much for humans to review quickly and in a cost-effective way. More important, content moderation isn’t a pleasant activity for humans. For a little background, see tcrn.ch/2n1d9M0.

Content Moderator is a member of the growing set of Microsoft Cognitive Services APIs that run in Azure. These APIs are all specific implementations of machine learning models, which means Microsoft has trained these models with lots of data. As a developer, you just call one of the APIs to get a result, whether for computer vision, speaker recognition or language understanding, to name but a few. The Content Moderator Image API uses image recognition, an area of machine learning where a lot of progress has been made in recent years.

Figure 1 shows how the Content Moderator pipeline is set up. Depending on your needs, Content Moderator offers different APIs to call, including moderation, review and jobs, in increasing levels of customizability. For example, the workflow API allows you to programmatically modify the workflows that jobs use. Next, you can see the different classifiers for image, text and video, plus CSAM, which stands for Child Sexual Abuse Material. PhotoDNA is the technology that helps organizations fight the spread of these images. The CSAM “classifier” works a little differently from the ones we just mentioned: PhotoDNA uses hash-and-match technology that compares images against a database of known images. Within Content Moderator, you can set up workflows that connect several filters (for example, first check for CSAM, then for Adult/Racy in images). The workflow can call out to humans for review, as well. Finally, the Content Moderator pipeline is flexible, and other APIs can be hooked in down the road.

Figure 1 User Content Flows Through Content Moderator, Where Machine Learning Models and Humans Work Together to Filter out Indecent Material

Moderate Your Users’ Content

Now that you have some understanding of the moderation technology, let’s plug it into our Butterfly bot. We’ll build Butterfly with the Node.js flavor of the Microsoft Bot Framework. Because all of these APIs are simply REST calls, you could moderate your content just as easily from C#; arguably even more easily, because there’s a .NET SDK for Content Moderator (bit.ly/2mXQqQV).

Several articles in this magazine have given excellent overviews of bots, and if you haven’t built one yet, they’re well worth checking out.

Alternatively, the quick starts on dev.botframework.com will have you up and running in no time.

Here, we’ll use the simple starter solution for Node.js. We use the dialog model that the Node.js framework provides to separate the conversation into separate dialogs. In the dialog code shown in Figure 2, the bot prompts the user for a URL to a picture in the first function. Control is then passed back to the user. When the user sends some text, the dialog flow passes the user input to the second function. The bot then forwards the input for evaluation in moderate.js. In our first attempt, we call the simple Moderator API (as opposed to the more sophisticated Review and Job APIs).

Figure 2 Dialog Code

bot.dialog('/moderateImage', [
  function (session, args) {
    // First step: ask the user for a picture URL.
    builder.Prompts.text(session, 'Give me a picture URL to moderate, please.');
  },
  function (session, results) {
    // Second step: forward the user's input to Content Moderator via moderate.js.
    var input = session.message.text;
    var cm = require('./moderate.js');
    cm(input, function (err, body) {
      if (err) {
        session.send('Oops. Something went wrong.');
        return;
      }
      // For now, simply echo the moderation result back to the user.
      var output = JSON.stringify(body);
      session.endDialog(output);
    });
  }
]);

To call the Content Moderator API, you need credentials, which you can get from either the Content Moderator Web site or from Azure. Here, we’ll take the latter approach. In the Azure Portal (portal.azure.com), create a new Cognitive Services account by clicking on the green plus sign and specifying Cognitive Services. After you click Create, specify Content Moderator as the API type (see Figure 3). Use the F0 tier because it’s free and allows for one call per second, which should be enough to play around with for now. Once the account is created, you’ll find the Account Name and Key under Keys in Resource Management. Note that you’ll use one of the Keys and the Resource ID string (from Properties) to hook up the Content Moderator API to the Content Moderator review portal later.

Figure 3 Select Content Moderator from the Cognitive Services List

The Azure Portal also shows you the endpoint as https://westus.api.cognitive.microsoft.com/contentmoderator. While that’s the correct base address, it’s only part of the URL; the full endpoint for each operation is listed in the Content Moderator documentation.

As shown in Figure 4, we specify “URL” as the DataRepresentation to send the URL of the picture, but you can just as easily send the image itself as a blob. Once you’ve called the moderator API, the body of the returned result contains JSON with the scores for the image. The scores range from 0.0 (innocent) to 1.0 (very adult/racy).

Figure 4 Sending a Picture URL to Content Moderator

var unirest = require("unirest");
var url = "https://westus.api.cognitive.microsoft.com/contentmoderator/
  moderate/v1.0/ProcessImage/Evaluate";
module.exports = function(input, cb) {
  unirest.post(url)
    .type("json")
    .headers({
      "content-type": "application/json",
      "Ocp-Apim-Subscription-Key":<Key from the Azure portal>,
    })
    .send({
      "DataRepresentation": "URL",
      "Value": input
    })
    .end(function (res) {
      return cb(res.error, res.body );
    });
};

You can see that the result, shown in Figure 5, contains the adult and racy prediction scores, along with the classification flags that indicate whether each score crossed the threshold. Happy as a clam, you throw these few lines of code in your bot and block all the content that’s racy or adult. You deploy the new version of the bot to Azure and users stream in again to consult your oracle. Phew. Happy with your results, you and your team gather around your kegerator to celebrate with a fresh beverage.

Figure 5 Adult and Racy Classification Scores for an Image

"AdultClassificationScore":0.0324602909386158,
"IsImageAdultClassified":false,
"RacyClassificationScore":0.06506475061178207,
"IsImageRacyClassified":false,
"AdvancedInfo":[],
"Result":false,
"Status":{
  "Code":3000,
  "Description":"OK",
  "Exception":null},
"TrackingId":"WU_ibiza_4470e022-4110-48bb-b4e8-7656b1f6703f_
  ContentModerator.F0_3e598d6c-5678-4e24-b7c6-62c40ef42028"
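Those few lines of code might look something like this minimal sketch, which drops into the second dialog step from Figure 2 and simply trusts the built-in classification flags (isOffensive is just a helper name we made up):

// Sketch: block anything the classifiers flag as adult or racy.
// "body" is the parsed response shown in Figure 5.
function isOffensive(body) {
  return body.IsImageAdultClassified || body.IsImageRacyClassified;
}

// Inside the second dialog step from Figure 2, instead of echoing the raw JSON:
cm(input, function (err, body) {
  if (err) {
    session.send('Oops. Something went wrong.');
    return;
  }
  if (isOffensive(body)) {
    session.endDialog('Sorry, that picture is too racy for Butterfly.');
  } else {
    session.endDialog('Thanks! Let me look into your future...');
  }
});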

Barely two sips into your micro-brewed IPA, tweets are keeping your phone on constant vibrate. There’s a flood of angry customers: “Why are you blocking my innocent pictures?” “My brother can get pictures with substantially more skin through. You need to fix this!”

Image classification is good, but there’s no one-size-fits-all threshold. The bare Content Moderator APIs we just used can clearly help humans make good decisions, but they’re not perfect. One improvement we could’ve made is to fine-tune the moderation by using the raw scores instead of the true/false adult-and-racy classifications. Additionally, it appears that users tend to submit the same images repeatedly. Fortunately, Content Moderator provides a List API to manage a custom set of images or text you’ve already screened. The moderator API does some fuzzy matching against these images, to prevent users from easily fooling it with slight modifications or resizing. This is a nice enhancement over the first approach, but it wouldn’t rule out those false positives the help desk had to contend with. As always, the optimal solution is found when humans and machines work as a team on the more difficult cases. Computer vision can detect the extremes, when images are clearly racy or adult, or clearly not. For the edge cases in between, we as humans can decide on which side of the fence the content falls for our particular scenario. This is where the Content Moderator review tool and API really shine. Let’s look at how we can use them to improve our solution.
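For instance, swapping the flag check in the previous sketch for your own cutoffs on the raw scores could look like the following; the cutoff values are purely illustrative and would need tuning for your own audience:

// Sketch: apply our own cutoffs to the raw scores instead of
// relying on the service's true/false classifications.
var ADULT_CUTOFF = 0.75;  // Illustrative values only.
var RACY_CUTOFF = 0.85;

function isOffensive(body) {
  return body.AdultClassificationScore > ADULT_CUTOFF ||
         body.RacyClassificationScore > RACY_CUTOFF;
}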

Calling the Moderator Review API

The approach up to now has been straightforward: Send a picture and block or allow it based on the Content Moderator labels. Now we’re going to expand the solution. The idea is to set up a flow as shown in Figure 6.

Figure 6 The Butterfly Bot Working with a Content Moderator Workflow

In this scenario, the user first sends an image to the Butterfly bot. In Step 2, the bot’s Web service sends the picture to the Content Moderator using the Review API’s Job operation, which takes a workflow Id as a parameter. We’ll set up this workflow in the review tool. Our specific workflow (Step 3) will immediately allow all pictures that are below a certain adult/racy score (for example, 0.7) and flag others that exceed a certain limit (such as 0.9). Our bot will allow the content with the low scores and block the content when it’s clearly too racy or adult. In the gray area in between, we want the content to go to the review tool for human moderators to inspect (Step 4). Our team of reviewers can then decide how to deal with the content. When they’re done reviewing, the Content Moderator will call our bot’s app service back to share the result. At that point, the bot can take down the content if it has been flagged as offensive. Note the flexibility here. You can adjust the scores in your workflow and the reviewers can decide what’s appropriate for your specific app.

To get started, you’ll need to sign up for the Content Moderator review tool at bit.ly/2n8XUB6. You can sign up with your Microsoft account or create a local account. Next, the site asks you to create a review team, whose purpose is to review gray-area content. You can create multiple sub teams and create workflows that assign reviews to different sub teams. In the credentials tab of the portal’s Settings page, you can link up your Content Moderator settings with the Azure Cognitive Services resource you created previously. Just copy the Key and Resource ID from the Azure portal to the Subscription Key and Resource ID settings in the Moderator UI. When you first create your account, you get an auto-configured “default” workflow. As you can see in the Review UI, this workflow will create a human review if an image is found to be adult. Let’s start by using this workflow in the Review API’s Job operation.

To call the Review Job API, you use the code shown in Figure 7.

Figure 7 Calling the Review Job API

var unirest = require("unirest");
// Job operation of the Review API; butterfly is our review team name.
var url = 'https://westus.api.cognitive.microsoft.com/contentmoderator' +
  '/review/v1.0/teams/butterfly/jobs';
module.exports = function (pictureUrl, callback) {
  unirest.post(url)
    .type("application/json")
    .query({
      ContentType: 'Image',
      ContentId: '67c21785-fb0a-4676-acf6-ccba82776f9a',
      WorkflowName: 'default',
      CallBackEndpoint: 'https://butterfly.azure.com/review'
    })
    .headers({
      "Ocp-Apim-Subscription-Key": "<ocp_key>"
    })
    .send({
      "ContentValue": pictureUrl
    })
    .end(function (res) {
      return callback(res.error, res.body);
    });
};

Note that the URL contains the team name (butterfly) and the jobs postfix. In CallBackEndpoint we specify the REST endpoint that Content Moderator will call to report the review results. We also specify a unique ContentId so we can correlate the image when Content Moderator calls us back, and we send the actual image URL in ContentValue. When the call succeeds, the body of the response doesn’t contain any moderation scores. Instead, it returns the JobId:

{"JobId":"2017035c6c5f19bfa543f09ddfca927366dfb7"}

You’ll get the result through the callback you specify in CallBackEndpoint. This result will again have the JobId, potentially a ReviewId, and a ContentId so you can cross-reference it. For the default workflow, Content Moderator will call back immediately with the result in Metadata if the image isn’t considered adult. The actual JSON will look similar to what’s shown in Figure 8.

Figure 8 Default Workflow Results

{
  "JobId": "2017035c6c5f19bfa543f09ddfca927366dfb7",
  "ReviewId": "",
  "WorkFlowId": "default",
  "Status": "Complete",
  "ContentType": "Image",
  "CallBackType": "Job",
  "ContentId": "67c21785-fb0a-4676-acf6-ccba82776f9a",
  "Metadata": {
    "adultscore": "0.465",
    "isadult": "False",
    "racyscore": "0.854",
    "isracy": "True"
  }
}

The status for this Job is set to Complete and the CallBackType is Job. If, however, the image is considered adult material, Content Moderator will create a review and populate the ReviewId field with an identifier. The image will then end up in the Review UI for your review team (see Figure 9).

Figure 9 The Content Moderator Review Tool with Pictures and Unselected Tags

The review tool and its use deserve a bit of explanation. The tool is designed for handling large volumes of images. A reviewer looks at all the pictures on a screen, tags the ones that don’t pass muster and then moves to the next screen. The tool gives the reviewer a few seconds to go back in case they think they’ve made a mistake. After those few seconds, Content Moderator saves the images with the final tags and calls the callback endpoint we specified once more, now with the final judgment. We can then take appropriate action, either taking down the content or publishing it, based on our business requirements. The second callback will look like what’s shown in Figure 10.

Figure 10 Results of the Review Callback

{
  "ReviewId": "201703367f430472074c1fb18651a04750448c",
  "ModifiedOn": "2017-03-07T18:34:17.9436621Z",
  "ModifiedBy": "Bob",
  "CallBackType": "Review",
  "ContentId": "67c21785-fb0a-4676-acf6-ccba82776f9a",
  "ContentType": "Image",
  "Metadata": {
    "adultscore": "0.465",
    "isadult": "False",
    "racyscore": "0.854",
    "isracy": "True"
  },
  "ReviewerResultTags": {
    "a": "True",
    "r": "True",
    "pt": "False"
  }
}

The CallBackType is now Review instead of Job and you can see the added ReviewerResultTags, while ContentId and ReviewId match the results from the first callback.
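Putting the two callbacks together, the endpoint registered in CallBackEndpoint could be handled with something like the sketch below. It assumes an Express route (you could just as well hang the route off the bot’s own restify server), the pendingSubmissions map shown earlier, and hypothetical publish and takeDown helpers; the 0.9 cutoff mirrors the workflow described in Figure 6:

var express = require('express');
var app = express();

// Content Moderator posts JSON (as in Figure 8 or Figure 10) to this endpoint.
app.post('/review', express.json(), function (req, res) {
  res.sendStatus(200);  // Acknowledge the callback right away.
  var result = req.body;
  var submission = pendingSubmissions[result.ContentId];
  if (!submission) {
    return;  // Not something we submitted; ignore.
  }
  if (result.CallBackType === 'Job' && result.ReviewId === '') {
    // No human review was created, so apply our own thresholds to the raw score.
    var racyScore = parseFloat(result.Metadata.racyscore);
    if (racyScore >= 0.9) {
      takeDown(submission);   // Hypothetical helper: remove the content.
    } else {
      publish(submission);    // Hypothetical helper: make the content visible.
    }
  } else if (result.CallBackType === 'Review') {
    // Human reviewers had the final say; "a" and "r" are the adult and racy tags.
    var flagged = result.ReviewerResultTags.a === 'True' ||
                  result.ReviewerResultTags.r === 'True';
    if (flagged) {
      takeDown(submission);
    } else {
      publish(submission);
    }
  }
});

app.listen(process.env.PORT || 3978);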

Custom Workflows

Now that we have a good understanding of the default workflow, we can start turning some knobs and dials. For Butterfly, we want to allow everything with a racy score less than 0.7, but block anything with a racy score higher than 0.9. For anything in between, we want the review team to take a second look. Therefore, in the workflow editor, we’ll create a new workflow.

You’ll see that there are lots of options in the Connect to dropdowns. These options allow you to build advanced workflows with, for example, the Optical Character Recognition (OCR) Text from Images and Face Detection APIs. The tool also allows you to declare a callback endpoint for a review. If you specify a callback both in the Job API’s CallBackEndpoint parameter and here in the workflow, the one in the workflow wins.

Now, when you call the Review Job API and specify this workflow, you’ll get a JobId back, just like when you called the default workflow. Depending on the racy score of your picture (between 0.7 and 0.9 in our case), Content Moderator will again create a review and you’ll see those images in the Content Moderator review UI.

There are two final notes about workflows. First, if the picture doesn’t qualify for a review in the initial Job callback, we still must find out whether the picture was on the high end and needs to be blocked, or on the low end and is allowed. To do this, you must duplicate the logic a bit, and that means things can get out of sync. Fortunately, the review tool exposes the workflow as JSON. Even better, there’s a Workflow REST API you can use to submit the workflows to the Content Moderator API. With a bit of plumbing you can use the same JSON to keep your bot’s logic and the Review UI tool in sync.
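A rough sketch of that plumbing follows. It assumes the Review API exposes a workflow Get operation under the team’s workflows path and that our workflow is named ButterflyWorkflow; both are assumptions worth verifying against the current Content Moderator documentation:

var unirest = require("unirest");

// Sketch: fetch the workflow definition so the bot and the review tool share
// one source of truth for the thresholds. The path below is an assumption
// based on the Review API; check the docs for the exact operation.
var workflowUrl = "https://westus.api.cognitive.microsoft.com/contentmoderator" +
  "/review/v1.0/teams/butterfly/workflows/ButterflyWorkflow";

function getWorkflow(cb) {
  unirest.get(workflowUrl)
    .headers({ "Ocp-Apim-Subscription-Key": "<Key from the Azure portal>" })
    .end(function (res) {
      cb(res.error, res.body);  // The body contains the workflow expressed as JSON.
    });
}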

A second note concerning the workflows relates to their extensibility. A focal point for the team is to make the review tool a common destination for various APIs. When you navigate to the Connectors tab in the review UI, you can see the currently available connectors. You can activate these connectors by entering the corresponding subscription keys. The case for PhotoDNA is easy to make. If your product gets any sort of user content, you want to make sure that no child exploitation images are being shared. Hooking this up to an existing workflow is easy once you’ve signed up for the service. It surely beats having to call the REST APIs separately. At the time of this writing, the Text Analytics API and the Face Detection API are available as connectors. For those you can go to the Azure Portal, create a Cognitive Service as we did earlier and enter the subscription key in the Content Moderator UI.

Wrapping Up

There are other advanced features we didn’t have space to dig into. For example, you can create your own Tags under Settings to use in your workflows; we created a “pt” tag for tagging profanity in text, which we’ll use in a workflow that’s set up for text content moderation. Additionally, workflows have alternate inputs to handle situations where the input format doesn’t match a qualifier, such as when you need to detect text profanity in an image through OCR. You can also sign up for Video Moderation, which is currently in private preview. Finally, you can expect more connectors to show up in the portal that you’ll be able to use to build and extend your workflows.

Using Content Moderator allows you to scale out your content moderation capabilities to all media formats across large volumes. Content Moderator is a platform—APIs and solutions that we’re building specifically for the content moderation vertical. Using it, you can scale and transition into other media formats and new content moderation capabilities as they become available. Content Moderator uses the best machine learning-based classifiers and other technologies that are getting better all the time. Improvements in the classifiers will automatically improve your results. 


Maarten van de Bospoort is a principal software development engineer in Developer Experience at Microsoft in Redmond. He works with developers and architects from large consumer ISVs to facilitate adoption of Microsoft technologies such as bots, cognitive services and occasionally a Universal Windows Platform app.

Sanjeev Jagtap is a senior product manager in the Content Moderator team at Microsoft in Redmond. He is passionate about customers, Microsoft technologies and Hackathons.

Thanks to the following Microsoft technical experts for reviewing this article: Christopher Harrison and Sudipto Rakshit

