Create moderation
POST https://api.openai.com/v1/moderations
Classifies whether text and/or image inputs are potentially harmful. Learn more in the moderation guide.
Request body
input (string or array, Required)
Input (or inputs) to classify. Can be a single string, an array of strings, or an array of multi-modal input objects similar to other models.

Possible types:

- string
  A string of text to classify for moderation.

- array of strings
  An array of strings to classify for moderation.

- array of multi-modal input objects
  An array of multi-modal inputs to the moderation model. Each item is one of the following:

  - An object describing an image to classify.
    - type (string, Required): Always image_url.
    - image_url (object, Required): Contains either an image URL or a data URL for a base64 encoded image.
      - url (string, Required): Either a URL of the image or the base64 encoded image data.

  - An object describing text to classify.
    - type (string, Required): Always text.
    - text (string, Required): A string of text to classify.
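For illustration, a multi-modal request mixing a text item and an image item might look like the sketch below. The text string and image URL are placeholders, and the sketch pins an omni-moderation model; check the moderation guide for which models accept image inputs.

curl https://api.openai.com/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "omni-moderation-latest",
    "input": [
      { "type": "text", "text": "...text to classify goes here..." },
      { "type": "image_url", "image_url": { "url": "https://example.com/image.png" } }
    ]
  }'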
model (string, Optional, defaults to omni-moderation-latest)
The content moderation model you would like to use. Learn more in the moderation guide, and learn about available models here.

Available models:
- omni-moderation-latest
- omni-moderation-2024-09-26
- text-moderation-latest
- text-moderation-stable
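For example, a request that pins a dated snapshot instead of relying on the default could look like this sketch (the input string is the same placeholder used in the example request below):

curl https://api.openai.com/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "omni-moderation-2024-09-26",
    "input": "I want to kill them."
  }'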
Response
A moderation object.
Example request
curl https://api.openai.com/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "input": "I want to kill them."
  }'
Example response
{
  "id": "modr-AB8CjOTu2jiq12hp1AQPfeqFWaORR",
  "model": "text-moderation-007",
  "results": [
    {
      "flagged": true,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": true,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": true,
        "violence": true
      },
      "category_scores": {
        "sexual": 0.000011726012417057063,
        "hate": 0.22706663608551025,
        "harassment": 0.5215635299682617,
        "self-harm": 2.227119921371923e-6,
        "sexual/minors": 7.107352217872176e-8,
        "hate/threatening": 0.023547329008579254,
        "violence/graphic": 0.00003391829886822961,
        "self-harm/intent": 1.646940972932498e-6,
        "self-harm/instructions": 1.1198755256458526e-9,
        "harassment/threatening": 0.5694745779037476,
        "violence": 0.9971134662628174
      }
    }
  ]
}
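To act on the result from the command line, a minimal sketch (assuming jq is installed) extracts the top-level flagged boolean from the first result:

curl -s https://api.openai.com/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "input": "I want to kill them."
  }' | jq '.results[0].flagged'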