Building an AI-Powered Content Moderation API with InsForge Edge Functions

06 Mar 2026 · 12 minute read
Arindam

Developer Advocate

Modern applications rely on user-generated content such as comments, reviews, and messages. Platforms must moderate this content to enforce safety policies and maintain compliance. Manual moderation does not scale, so production systems typically rely on automated moderation pipelines powered by AI.

Traditional implementations require multiple backend services. Developers often provision servers, integrate AI APIs, manage databases, and configure storage separately. This fragmented setup increases operational overhead and slows development.

InsForge simplifies this architecture by combining Edge Functions, PostgreSQL Database, Storage, and Model Gateway in a single platform. InsForge's published benchmarks also report roughly 1.6x faster responses and 2.4x lower token usage compared to fragmented integrations.

In this tutorial, we will build a production-ready AI moderation API that runs entirely within InsForge.

Architecture Overview

What We Are Building

Here are the tools that we will be using to build a simple backend moderation workflow using InsForge core services:

  • AI Moderation API Endpoint: We will create an API endpoint using Edge Functions that accepts user-submitted text content and processes moderation requests.
  • AI-Powered Content Evaluation: The API will use Model Gateway to access an AI model that classifies submitted content as SAFE or UNSAFE.
  • Database Storage for Approved Content: Approved comments will be stored in a PostgreSQL Database managed by InsForge.
  • Attachment Handling with Storage: Optional user attachments will be uploaded and stored using Storage Buckets.
  • Automated Moderation Response: Unsafe content will be rejected immediately, and the API will return a structured moderation response.
  • Production-Ready Backend Workflow: The moderation pipeline will run entirely within InsForge using Database, Edge Functions, Model Gateway, and Storage, without external servers or additional infrastructure.

Project Setup and Repository Structure

Before configuring the backend resources, clone the project repository and review the project structure.

Clone the repository:

```bash
git clone https://github.com/Studio1HQ/Content-moderation-Insforge
cd Content-moderation-Insforge
```

Install dependencies:

```bash
npm install
```

The repository contains both the Next.js frontend and the InsForge Edge Function used for moderation.

Repository Structure

| Folder | Purpose |
| --- | --- |
| `src/app` | Next.js application pages and layouts |
| `src/components` | UI components such as the moderation form |
| `src/lib` | Client utilities for connecting to InsForge APIs |
| `insforge-functions/moderate-comment` | Edge Function implementation for moderation |
| `handler.ts` | Serverless function that processes moderation requests |

This structure keeps the frontend and backend logic organized within the same project while allowing the Edge Function to be deployed independently.

After cloning the repository, proceed with configuring the backend resources in InsForge.

Note: You can set up this backend in two ways. Follow the manual steps in this tutorial to create the database, storage bucket, and Edge Function using the dashboard and CLI. Alternatively, you can use InsForge MCP with your AI coding agent to provision the same resources using a single prompt. See the MCP section at the end of the article for the prompt template and instructions.

Step 1: Setting Up the Database

InsForge provides a managed PostgreSQL Database that you can configure directly from the dashboard.

Open the Tables Section

  • Open your project in the InsForge Dashboard.
  • In the left sidebar, select Tables.
  • Click the + icon next to Tables.
Tables Section

Create the following columns.

| Column | Type | Description |
| --- | --- | --- |
| `id` | uuid | Primary key for each comment |
| `content` | string | User-submitted comment text |
| `attachment_url` | string | URL for the uploaded file (optional) |
| `status` | string | Moderation result (approved or rejected) |
| `created_at` | timestamp | Time when the comment was created |
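The schema above can also be mirrored in TypeScript so the Edge Function and frontend share one definition of a row. The interface below is an illustrative sketch written for this tutorial, not a type generated by InsForge:

```typescript
// Sketch of a comments-table row as a TypeScript type.
// Field names mirror the schema above; the interface itself is illustrative.
interface CommentRow {
  id: string;                    // uuid primary key
  content: string;               // user-submitted comment text
  attachment_url: string | null; // optional file URL
  status: "approved" | "rejected";
  created_at: string;            // ISO timestamp
}

// Example row for an approved comment without an attachment.
const example: CommentRow = {
  id: "6f1c2a9e-0000-4000-8000-000000000000",
  content: "This community platform is very helpful.",
  attachment_url: null,
  status: "approved",
  created_at: new Date().toISOString(),
};
```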

Save the Table

  • Click Create Table to apply the schema.
  • The comments table will appear in the Tables panel.
Comments Table

Step 2: Creating the Edge Function

Next, create the serverless API that will process moderation requests.

InsForge Edge Functions allow you to run backend logic without managing servers. In this tutorial, the function receives user content, evaluates it using AI, and stores approved results in the database.

Navigate to the Edge Function directory in the repository:

```bash
insforge-functions/moderate-comment/
```

Inside this folder, there will be a file named:

```bash
handler.ts
```

This file will contain the moderation logic executed by the Edge Function.

The Edge Function performs the following tasks:

  • Accept a POST request containing user content.
  • Send the content to the AI model through Model Gateway.
  • Classify the content as SAFE or UNSAFE.
  • Upload attachments to Storage if present.
  • Insert approved content into the comments table.
  • Return a structured moderation response.

All moderation logic runs inside the Edge Function, keeping the backend workflow centralized within InsForge.
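The flow above can be sketched as a handler built around a small pure helper that normalizes the model's verdict. This is a sketch under stated assumptions: the exact InsForge Edge Function signature, Model Gateway route, and response shape are illustrative and may differ from the real APIs — the deployed `handler.ts` in the repository is the reference implementation.

```typescript
// Normalize raw model output into a strict verdict.
// Kept as a pure function so the moderation decision is easy to test.
export function parseVerdict(raw: string): "SAFE" | "UNSAFE" {
  return raw.trim().toUpperCase().startsWith("SAFE") ? "SAFE" : "UNSAFE";
}

// Illustrative request flow; route and payload shape are assumptions.
export async function handler(req: Request): Promise<Response> {
  if (req.method !== "POST") {
    return new Response("Method Not Allowed", { status: 405 });
  }
  const { content } = await req.json();

  // Hypothetical Model Gateway call (an OpenAI-compatible chat endpoint is assumed).
  const ai = await fetch("/api/ai/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "openai/gpt-4o-mini",
      messages: [
        { role: "system", content: "Reply with exactly SAFE or UNSAFE." },
        { role: "user", content },
      ],
    }),
  });
  const data = await ai.json();
  const verdict = parseVerdict(data.choices[0].message.content);

  if (verdict === "UNSAFE") {
    // Reject immediately; nothing is stored.
    return Response.json({ status: "rejected" }, { status: 400 });
  }
  // Upload any attachment and insert the approved row into the comments
  // table here (via the InsForge storage and database APIs), then respond.
  return Response.json({ status: "approved" });
}
```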

Deploy the function using the InsForge CLI:

```bash
insforge functions deploy moderate-comment --file ./insforge-functions/moderate-comment/handler.ts
```
Deploy Function

Once deployed, the function becomes available as a backend API endpoint that the frontend application can call.

Function Deployed

Step 3: AI Integration Inside the Function

The moderation logic inside the Edge Function uses Model Gateway, which provides unified access to multiple AI models directly within InsForge.

Model Gateway allows Edge Functions to call AI models without configuring external API clients or managing provider-specific integrations.

Open the Model Gateway section in the InsForge dashboard and enable a model for the project.

For this tutorial, enable:

```bash
openai/gpt-4o-mini
```

This model will be used to classify incoming content during moderation.

Model Gateway

Use the CLI to send a test request to the moderation API.

```powershell
insforge functions invoke moderate-comment --data "{\"content\":\"This community platform is very helpful.\"}"
```

This command sends a JSON payload containing the content field to the Edge Function.

Test Request

The Edge Function also inserts the approved comment into the comments table in the database.

Step 4: Configuring InsForge Storage

The moderation workflow also supports optional file uploads using InsForge Storage. Storage provides an S3-compatible object storage system that integrates directly with Edge Functions and the database.

When a user submits a comment with an attachment, the Edge Function uploads the file to a storage bucket before inserting the comment into PostgreSQL.

Create a Storage Bucket

Open the Storage section in the InsForge dashboard.

  • Navigate to Storage in the sidebar.
  • Click Create Bucket.
  • Name the bucket: attachments

This bucket will store files uploaded with moderated comments.

Storage Bucket

The upload operation returns a public file URL, which is stored in the attachment_url column of the comments table.

The moderation function processes attachments as follows:

  1. The user submits content with an optional file.
  2. The Edge Function evaluates the text using AI moderation.
  3. If the content is classified as SAFE, the file is uploaded to the attachments bucket.
  4. The returned file URL is stored in the comments table.
  5. If the content is UNSAFE, the function rejects the request and no file is uploaded.

This ensures that only approved content and attachments are stored, keeping the storage system aligned with the moderation rules.
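That ordering — moderate first, upload only on approval — can be captured in a small guard function. The bucket name matches the tutorial; the storage call in the comment is only hinted at, since the exact upload API is not shown here:

```typescript
// Decide whether an attachment should be uploaded, given the moderation
// verdict and whether the user actually attached a file.
export function shouldUpload(
  verdict: "SAFE" | "UNSAFE",
  hasAttachment: boolean,
): boolean {
  // Files are persisted only for approved content.
  return verdict === "SAFE" && hasAttachment;
}

// Illustrative use inside the Edge Function (the storage API is hypothetical):
// if (shouldUpload(verdict, file !== null)) {
//   const { url } = await storage.bucket("attachments").upload(file);
//   row.attachment_url = url; // saved alongside the comment
// }
```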

Step 5: Building the Next.js UI

The repository already includes a Next.js application that provides a simple interface for interacting with the moderation API.

Navigate to the frontend code inside the src directory.

Key UI Files

| File / Folder | Purpose |
| --- | --- |
| `src/app/page.tsx` | Main page that renders the moderation interface |
| `src/components` | Reusable UI components for the moderation workflow |
| `src/lib/insforge.ts` | Utility for connecting the frontend to the InsForge backend |

The UI includes a form where users submit content for moderation.

The form collects:

  • Text content entered by the user
  • Optional file attachment
  • A submit action that triggers the moderation request

When the user submits the form, the application sends a POST request to the Edge Function endpoint.
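A minimal submit handler might look like the following sketch. The endpoint path is an assumption for illustration — use the function URL shown in your InsForge dashboard:

```typescript
// Build the JSON payload sent to the moderation endpoint.
// Kept as a pure function so the request body is easy to test.
export function buildPayload(content: string, attachmentUrl?: string) {
  return attachmentUrl
    ? { content, attachment_url: attachmentUrl }
    : { content };
}

// Illustrative submit handler; the function URL below is a placeholder.
export async function submitComment(content: string, attachmentUrl?: string) {
  const res = await fetch(
    `${process.env.NEXT_PUBLIC_INSFORGE_BASE_URL}/functions/moderate-comment`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(buildPayload(content, attachmentUrl)),
    },
  );
  return res.json(); // e.g. { status: "approved" } or { status: "rejected" }
}
```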

Moderation Form

The UI handles the API response and updates the interface accordingly.

  • Approved comments appear in the moderation results section.
  • Rejected content displays an error message.
  • Approved entries are also visible in the comments database table.

This setup creates a complete workflow where the Next.js UI communicates with the InsForge Edge Function to perform moderation in real time.

Using an AI Agent to Build the UI

You can also accelerate this step using an AI coding agent (such as Cursor, Claude Code, or other agent-based tools). Instead of manually writing the UI components, the agent can generate the form, API calls, and component structure based on a prompt.

Example prompt:

```text
Create a Next.js page for a content moderation demo.

Requirements:
- A form with a textarea for user comments
- An optional file upload input
- A submit button
- Send a POST request to the InsForge Edge Function endpoint for moderation
- Display the moderation result (approved or rejected) in the UI
- Use React state to handle form submission and responses
```

Step 6: Testing the API Endpoint

After deploying the Edge Function and setting up the UI, test the moderation workflow to verify that the API behaves correctly.

Submit Safe Content

Enter a comment through the UI and submit the form.

Safe Content Test

Expected behavior:

  • The Edge Function sends the content to the AI moderation model.
  • The model classifies the text as SAFE.
  • The function inserts the comment into the comments table in PostgreSQL.
  • If an attachment is included, the file is uploaded to the attachments storage bucket.
  • The API returns an approved response to the frontend.
Approved Result

Next, test a rejection case.

Unsafe Content Test

Expected behavior:

  • The Edge Function sends the text to the AI moderation model.
  • The model classifies the content as UNSAFE.
  • The function immediately returns a rejection response.
  • No entry is inserted into the comments table.
  • No file is uploaded to Storage.
Rejected Result

The table in your InsForge dashboard also reflects the results:

Dashboard Results

Step 7: Deployment Using InsForge

Once the function and UI are ready, deploy the backend using the InsForge CLI. This publishes the Edge Function and connects it to the project environment.

Refer to the deployment guide here.

Authenticate the CLI with your InsForge account.

```bash
insforge auth login
```

Complete the authentication process in the browser. Link the local project directory to your InsForge backend.

```bash
insforge link
```

Select the project created earlier in the InsForge dashboard. This connects the CLI to the correct backend workspace.

Deploy the Next.js application while passing the required environment variable.

```powershell
insforge deployments deploy . --env "{\"NEXT_PUBLIC_INSFORGE_BASE_URL\":\"https://your-project.insforge.app\"}"
```

This environment variable allows the frontend to communicate with the deployed Edge Function.
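Inside the frontend, Next.js inlines `NEXT_PUBLIC_` variables at build time. A small helper keeps the fallback logic in one place; the local fallback URL below is illustrative, not an InsForge default:

```typescript
// Resolve the InsForge base URL, falling back to a local development URL
// when the environment variable is not set. The fallback is illustrative.
export function resolveBaseUrl(value: string | undefined): string {
  return value ?? "http://localhost:7130";
}

// Direct property access so Next.js can inline the value at build time.
const baseUrl = resolveBaseUrl(process.env.NEXT_PUBLIC_INSFORGE_BASE_URL);
```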

Deployment

Verify the Deployment

After deployment, the application becomes accessible via the InsForge-hosted domain.

Live App

Access the live demo here.

Using MCP to Accelerate Development

Instead of manually creating tables, storage buckets, and Edge Functions, you can also configure the backend using Remote MCP (Model Context Protocol).

MCP exposes InsForge backend capabilities as tools that an AI coding agent can call to provision resources automatically. With a single prompt, the agent can generate the database schema, configure storage, and deploy the moderation function.

MCP Setup

Example prompt used to create this backend workflow:

```text
Create backend resources for a content moderation application using InsForge.

Requirements:

1. Create a PostgreSQL table named "comments" with fields:
   id (UUID primary key)
   content (text)
   attachment_url (text, nullable)
   status (text)
   created_at (timestamp)

2. Create a storage bucket named "attachments" for storing uploaded files.

3. Create an Edge Function named "moderate-comment" that:
   - accepts POST requests with comment text
   - sends the text to an AI model
   - classifies the content as SAFE or UNSAFE
   - uploads attachments to storage if present
   - inserts approved content into the database
```

Using MCP, developers can provision backend resources and deploy functions directly from prompts, significantly accelerating backend setup while keeping the same architecture described in this tutorial.

Refer to the quick demo here.

Conclusion

In this tutorial, we built a content moderation API using InsForge Edge Functions, integrated AI-powered classification through Model Gateway, stored approved results in PostgreSQL, and handled optional file uploads with Storage. The entire workflow runs inside InsForge, without external servers or fragmented infrastructure.

This approach demonstrates how developers can combine Edge Functions, AI integration, database services, and storage to implement production-ready backend APIs with minimal operational overhead.

If your application relies on user-generated content, moderation pipelines, or AI-assisted workflows, this architecture provides a straightforward and scalable foundation.

Ready to simplify your backend stack? Explore InsForge's Edge Functions, Model Gateway, PostgreSQL database, and Storage services to build intelligent APIs without managing infrastructure.