This guide outlines a powerful, flexible, and cost-effective technology stack for Indie Hackers looking to build AI-powered applications. It leverages Hugging Face for models, OpenRouter for LLM access, n8n for backend automation, Elest.io for hosting self-managed services, and Next.js for the frontend. This combination prioritizes budget-friendliness, minimal operational overhead, and scalability.
The Proposed Stack
- Frontend: Next.js (hosted on Vercel)
- Why: Robust React framework for building fast, modern web applications. Excellent developer experience, server-side rendering (SSR), static site generation (SSG), API routes, and easy deployment with Vercel.
- AI Model Access & Fine-Tuning: Hugging Face Ecosystem
- Transformers.js: For running selected NLP/CV models directly in the browser or in Next.js API routes (serverless functions) for lightweight tasks.
- Hugging Face Hub: Access to pre-trained models and datasets.
- PEFT & AutoTrain: For cost-effective fine-tuning of models on custom data.
- Hugging Face Inference Endpoints: For deploying custom or larger pre-trained models as scalable API endpoints if client-side/serverless is not feasible.
- Self-hosted LLMs (via Ollama on Elest.io): For running open-source LLMs with more control and potentially lower cost for high usage, managed by Elest.io.
- Multi-LLM API Gateway: OpenRouter.ai
- Why: Simplifies access to a vast array of LLMs (GPT-4, Claude, Mixtral, Llama, etc.) through a single API. Enables easy model switching for experimentation, cost optimization, and access to diverse capabilities without multiple direct integrations.
- Workflow Automation & No-Code/Low-Code Backend: n8n (self-hosted on Elest.io)
- Why: Powerful visual workflow automation. Connects various services, APIs, and AI models. Self-hosting via Elest.io makes it cost-effective and gives data control. Ideal for backend logic, AI task orchestration, and integrating third-party services without extensive custom backend code.
- Infrastructure & Hosting for Backend Services: Elest.io
- Why: Simplifies deployment and management of self-hosted open-source software like n8n, Supabase (for database/auth), and AI tools (Ollama, Dify, FlowiseAI). Reduces DevOps overhead significantly for Indie Hackers.
- Deployment:
- Next.js Frontend: Vercel (seamless integration, CI/CD, global CDN).
- Self-Hosted Services (n8n, Supabase, Ollama, etc.): Elest.io, deploying to a chosen cloud provider (e.g., Hetzner, DigitalOcean for budget-friendliness).
- Optional – Database & Authentication: Supabase (self-hosted on Elest.io or Supabase Cloud)
- Why: Open-source Firebase alternative. Provides PostgreSQL database, authentication, real-time subscriptions, and storage. Elest.io simplifies self-hosting Supabase. Alternatively, Supabase Cloud offers a generous free tier.
Step-by-Step Integration Guide (Next.js Focus)
This tutorial will guide Indie Hackers and solo developers through setting up and integrating these components.
✅ 1. Setting up a Next.js Project
We will use `create-next-app` with the App Router (the default in Next.js 14+).
Prerequisites:
- Node.js (latest LTS version recommended)
- npm or yarn or pnpm
Steps:
- Create a new Next.js application: Open your terminal and run:
```bash
npx create-next-app@latest my-ai-app
```
  During the setup, you will be asked a few questions. For this guide, you can choose the following (or adjust to your preferences):
  - Would you like to use TypeScript? Yes (recommended for larger projects)
  - Would you like to use ESLint? Yes
  - Would you like to use Tailwind CSS? Yes (popular choice for styling)
  - Would you like to use `src/` directory? Yes
  - Would you like to use App Router? Yes (recommended)
  - Would you like to customize the default import alias? No (or configure if you prefer)
- Navigate to your project directory:
```bash
cd my-ai-app
```
- Run the development server:
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
```
  Open http://localhost:3000 in your browser to see the default Next.js welcome page.
- Project Structure (App Router with `src/` directory): Your `src/` directory will look something like this:
```
my-ai-app/
├── public/
├── src/
│   ├── app/
│   │   ├── globals.css
│   │   ├── layout.tsx
│   │   └── page.tsx
│   └── ... (other components, lib, etc.)
├── next.config.js
├── package.json
├── tsconfig.json
└── ... (other configuration files)
```
  - `src/app/page.tsx`: This is your main homepage.
  - `src/app/layout.tsx`: This is the root layout for your application.
  - API routes will be created as `route.ts` (or `route.js`) files within folders under `src/app/api/`.
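To make the route convention concrete, here is a minimal, hypothetical health-check route (the file name is illustrative, not part of the starter template):

```typescript
// src/app/api/health/route.ts (illustrative only; file name is hypothetical)
import { NextResponse } from 'next/server';

// Handles GET /api/health and returns a simple JSON payload
export async function GET() {
  return NextResponse.json({ status: 'ok', timestamp: new Date().toISOString() });
}
```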
This completes the initial setup of your Next.js project. In the following sections, we will integrate the AI tools and backend services.
(Next sections will cover Hugging Face, OpenRouter, n8n, and Deployment)
✅ 2. Installing and Using Hugging Face APIs or Locally Hosted Models with Next.js
There are several ways to integrate Hugging Face models with a Next.js application:
Option A: Using `transformers.js` for In-Browser/Edge Inference
This is great for lightweight NLP tasks, keeping inference client-side or on Vercel Edge Functions, reducing server load and costs.
- Install `transformers.js`:
```bash
npm install @xenova/transformers
```
- Create a Reusable Hook or Utility Function (Example: Sentiment Analysis): Create `src/lib/sentiment.ts`:
```typescript
// src/lib/sentiment.ts
import { pipeline, Pipeline } from '@xenova/transformers';

// Singleton pattern to load the pipeline only once
class SentimentPipeline {
  static task = 'sentiment-analysis';
  static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
  static instance: Pipeline | null = null;

  static async getInstance(progress_callback?: Function) {
    if (this.instance === null) {
      this.instance = await pipeline(this.task, this.model, { progress_callback });
    }
    return this.instance;
  }
}

export const analyseSentiment = async (text: string, progress_callback?: Function) => {
  const predictor = await SentimentPipeline.getInstance(progress_callback);
  const result = await predictor(text);
  return result;
};
```
- Use in a Client Component or API Route: Example: Client Component `src/app/sentiment-checker.tsx`:
```tsx
// src/app/sentiment-checker.tsx
'use client';

import { useState } from 'react';
import { analyseSentiment } from '@/lib/sentiment'; // Adjust path if needed

export default function SentimentChecker() {
  const [text, setText] = useState('');
  const [sentiment, setSentiment] = useState<any>(null);
  const [loading, setLoading] = useState(false);
  const [progress, setProgress] = useState(0);

  const handleAnalyse = async () => {
    if (!text.trim()) return;
    setLoading(true);
    setSentiment(null);
    setProgress(0);
    try {
      const result = await analyseSentiment(text, (p: any) => {
        setProgress(p.progress);
      });
      setSentiment(result);
    } catch (error) {
      console.error('Error analysing sentiment:', error);
      setSentiment({ error: 'Failed to analyse sentiment' });
    }
    setLoading(false);
  };

  return (
    <div className="p-4">
      <h2 className="text-xl font-semibold mb-2">Sentiment Analyser (Client-Side)</h2>
      <textarea
        className="w-full p-2 border rounded mb-2 text-black"
        rows={4}
        value={text}
        onChange={(e) => setText(e.target.value)}
        placeholder="Enter text to analyse..."
      />
      <button
        onClick={handleAnalyse}
        disabled={loading}
        className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded disabled:opacity-50"
      >
        {loading ? `Analysing... (${progress.toFixed(0)}%)` : 'Analyse Sentiment'}
      </button>
      {sentiment && (
        <div className="mt-4 p-2 border rounded bg-gray-100 text-black">
          <pre>{JSON.stringify(sentiment, null, 2)}</pre>
        </div>
      )}
    </div>
  );
}
```
  Add this component to your `src/app/page.tsx` to test it (a minimal example follows below).
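For example, one way to render it on the homepage (a minimal sketch; adapt to your existing `page.tsx`):

```tsx
// src/app/page.tsx (minimal sketch)
import SentimentChecker from './sentiment-checker';

export default function Home() {
  return (
    <main className="min-h-screen p-8">
      <SentimentChecker />
    </main>
  );
}
```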
Option B: Using Hugging Face Inference API (Server-Side in API Route)
This is suitable for models not yet available in `transformers.js`, or when you prefer server-side processing.
- Get your Hugging Face API Token: Go to your Hugging Face account settings -> Access Tokens -> New token (give it the `read` role).
- Store the token in `.env.local`: Create a `.env.local` file in your project root:
```
HF_API_TOKEN=your_hugging_face_api_token
```
- Create a Next.js API Route: Create `src/app/api/hf-inference/route.ts`:
```typescript
// src/app/api/hf-inference/route.ts
import { NextResponse } from 'next/server';

const API_URL = "https://api-inference.huggingface.co/models/";

export async function POST(request: Request) {
  const { model, inputs } = await request.json();

  if (!model || !inputs) {
    return NextResponse.json({ error: 'Model and inputs are required' }, { status: 400 });
  }
  if (!process.env.HF_API_TOKEN) {
    return NextResponse.json({ error: 'Hugging Face API token not configured' }, { status: 500 });
  }

  try {
    const response = await fetch(`${API_URL}${model}`, {
      headers: {
        "Authorization": `Bearer ${process.env.HF_API_TOKEN}`,
        "Content-Type": "application/json",
      },
      method: "POST",
      body: JSON.stringify({ inputs }),
    });

    if (!response.ok) {
      const errorText = await response.text();
      console.error(`HF API Error (${response.status}): ${errorText}`);
      return NextResponse.json({ error: `Hugging Face API error: ${errorText}` }, { status: response.status });
    }

    const result = await response.json();
    return NextResponse.json(result);
  } catch (error: any) {
    console.error('Error calling Hugging Face Inference API:', error);
    return NextResponse.json({ error: 'Failed to call Hugging Face Inference API', details: error.message }, { status: 500 });
  }
}
```
- Call from Frontend:
```tsx
// In a client component
async function callHfInference(model: string, inputText: string) {
  const response = await fetch('/api/hf-inference', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: model, inputs: inputText }),
  });
  if (!response.ok) {
    const err = await response.json();
    throw new Error(err.error || 'Failed to fetch from HF Inference API');
  }
  return response.json();
}

// Example usage:
// callHfInference('distilbert-base-uncased-finetuned-sst-2-english', 'I love Next.js!')
//   .then(data => console.log(data))
//   .catch(error => console.error(error));
```
Option C: Self-Hosted Model (e.g., Ollama on Elest.io)
If you deploy a model server like Ollama on Elest.io, it will provide an API endpoint (e.g., `http://your-ollama-instance.elest.io:11434/api/generate`).
- Deploy Ollama on Elest.io: Follow Elest.io’s instructions to deploy an Ollama instance. Note its URL and port.
- Create a Next.js API Route to proxy/call Ollama: Create `src/app/api/ollama-proxy/route.ts`:
```typescript
// src/app/api/ollama-proxy/route.ts
import { NextResponse } from 'next/server';

const OLLAMA_API_URL = process.env.OLLAMA_API_URL; // e.g., http://your-ollama.elest.io:11434/api/generate

export async function POST(request: Request) {
  if (!OLLAMA_API_URL) {
    return NextResponse.json({ error: 'Ollama API URL not configured' }, { status: 500 });
  }

  try {
    const { model, prompt, stream } = await request.json();
    if (!model || !prompt) {
      return NextResponse.json({ error: 'Model and prompt are required' }, { status: 400 });
    }

    const response = await fetch(OLLAMA_API_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model, prompt, stream: stream || false }),
    });

    if (!response.ok) {
      const errorText = await response.text();
      return NextResponse.json({ error: `Ollama API error: ${errorText}` }, { status: response.status });
    }

    // For non-streaming, directly return JSON
    if (stream === false || typeof stream === 'undefined') {
      const result = await response.json();
      return NextResponse.json(result);
    }

    // For streaming, return the response stream directly.
    // Note: the Vercel Hobby plan might have issues with long-running streaming responses.
    // Consider Vercel Pro or alternative hosting for robust streaming.
    return new Response(response.body, {
      headers: { 'Content-Type': 'application/x-ndjson' }, // Or appropriate stream type
    });
  } catch (error: any) {
    console.error('Error calling Ollama API:', error);
    return NextResponse.json({ error: 'Failed to call Ollama API', details: error.message }, { status: 500 });
  }
}
```
  Add `OLLAMA_API_URL` to your `.env.local`.
- Call from Frontend: Similar to the HF Inference API example, but point to `/api/ollama-proxy` (a minimal sketch follows below).
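A minimal sketch of such a call, assuming the non-streaming case and the proxy route above (the `callOllama` helper name is illustrative):

```tsx
// In a client component: non-streaming call to the Ollama proxy route
async function callOllama(model: string, prompt: string) {
  const response = await fetch('/api/ollama-proxy', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!response.ok) {
    const err = await response.json();
    throw new Error(err.error || 'Failed to fetch from Ollama proxy');
  }
  return response.json(); // Ollama's /api/generate returns the generated text in a `response` field
}

// Example usage:
// callOllama('llama3', 'Explain webhooks in one sentence.')
//   .then(data => console.log(data.response))
//   .catch(error => console.error(error));
```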
✅ 3. Connecting to OpenRouter API via Environment Variables and Async Functions
OpenRouter allows you to use various LLMs through a single API, compatible with the OpenAI SDK.
- Install OpenAI SDK:
```bash
npm install openai
```
- Get your OpenRouter API Key: Sign up at OpenRouter.ai and get your API key.
- Store the key in `.env.local`:
```
OPENROUTER_API_KEY=your_openrouter_api_key
# Optional: For OpenRouter analytics/ranking
YOUR_SITE_URL=http://localhost:3000 # Or your deployed site URL
YOUR_APP_NAME="My AI App"
```
- Create a Next.js API Route for OpenRouter: Create `src/app/api/openrouter/route.ts`:
```typescript
// src/app/api/openrouter/route.ts
import { NextResponse } from 'next/server';
import OpenAI from 'openai';

const openrouterApiKey = process.env.OPENROUTER_API_KEY;
const siteUrl = process.env.YOUR_SITE_URL;
const appName = process.env.YOUR_APP_NAME;

if (!openrouterApiKey) {
  console.error('OpenRouter API key not found. Please set OPENROUTER_API_KEY in .env.local');
}

const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: openrouterApiKey,
  defaultHeaders: {
    "HTTP-Referer": siteUrl || '',
    "X-Title": appName || '',
  },
});

export async function POST(request: Request) {
  if (!openrouterApiKey) {
    return NextResponse.json({ error: 'OpenRouter API key not configured' }, { status: 500 });
  }

  try {
    const { model, messages, stream } = await request.json();
    if (!model || !messages) {
      return NextResponse.json({ error: 'Model and messages are required' }, { status: 400 });
    }

    const completion = await client.chat.completions.create({
      model: model, // e.g., "mistralai/mistral-7b-instruct", "openai/gpt-3.5-turbo", "anthropic/claude-3-haiku-20240307"
      messages: messages, // Array of message objects: [{ role: "user", content: "Hello!" }]
      stream: stream || false,
    });

    if (stream) {
      // Handling a streaming response requires returning a ReadableStream from this route.
      // The following is a placeholder for actual stream handling; the Vercel AI SDK simplifies this a lot.
      return NextResponse.json({ error: "Streaming not fully implemented in this basic example for OpenRouter, use Vercel AI SDK or handle ReadableStream." }, { status: 501 });
    } else {
      return NextResponse.json(completion);
    }
  } catch (error: any) {
    console.error('OpenRouter API error:', error);
    return NextResponse.json({ error: 'Failed to fetch from OpenRouter API', details: error.message }, { status: 500 });
  }
}
```
  Note on Streaming: True streaming with the `openai` package in Next.js API routes requires careful handling of `ReadableStream`. The Vercel AI SDK (`ai` package) greatly simplifies this. For this basic example, streaming is stubbed.
- Call from Frontend:
```tsx
// In a client component
async function callOpenRouter(model: string, userMessages: Array<{role: string, content: string}>) {
  const response = await fetch('/api/openrouter', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: model, messages: userMessages }),
  });
  if (!response.ok) {
    const err = await response.json();
    throw new Error(err.error || 'Failed to fetch from OpenRouter');
  }
  return response.json();
}

// Example usage:
// callOpenRouter('mistralai/mistral-7b-instruct', [{role: 'user', content: 'Explain quantum computing in simple terms.'}])
//   .then(data => console.log(data.choices[0].message.content))
//   .catch(error => console.error(error));
```
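If you do want token-by-token streaming without pulling in the Vercel AI SDK, a rough sketch of a streaming route handler could look like the following (assuming the `openai` v4+ SDK, where `stream: true` returns an async iterable of chunks):

```typescript
// Hedged sketch: a streaming variant of the OpenRouter route handler
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: process.env.OPENROUTER_API_KEY,
});

export async function POST(request: Request) {
  const { model, messages } = await request.json();

  // With `stream: true`, the SDK returns an async iterable of completion chunks
  const completionStream = await client.chat.completions.create({ model, messages, stream: true });

  const encoder = new TextEncoder();
  const readable = new ReadableStream({
    async start(controller) {
      for await (const chunk of completionStream) {
        // Each chunk carries a delta with the next slice of the assistant message
        controller.enqueue(encoder.encode(chunk.choices[0]?.delta?.content || ''));
      }
      controller.close();
    },
  });

  // The client can consume this incrementally, e.g. via response.body.getReader()
  return new Response(readable, { headers: { 'Content-Type': 'text/plain; charset=utf-8' } });
}
```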
✅ 4. Creating Webhooks with n8n and Triggering Flows from Frontend
Assumptions: You have an n8n instance running (e.g., self-hosted via Elest.io or n8n Cloud).
Step 1: Create an n8n Workflow with a Webhook Trigger
- Open your n8n instance.
- Create a new workflow.
- Add a Webhook node as the trigger.
- It will automatically generate a Test URL and a Production URL.
- Set HTTP Method to `POST`.
- For testing, you can use the Test URL. For production, activate the workflow and use the Production URL.
- You can define a path for the webhook if desired (e.g., `/my-ai-task`).
- Add other nodes to your workflow. For example, an OpenAI node (configured with your OpenRouter `baseURL` and API key, or your direct OpenAI key) to process data received from the webhook, or a Google Sheets node to save data.
  - Example: Webhook -> Set Node (to extract data) -> OpenAI (Chat Completion) -> Respond to Webhook.
- Respond to Webhook (Optional but often needed):
- Add a Respond to Webhook node at the end of your workflow if you want to send a response back to the Next.js app synchronously.
- Configure it to send back the data you want (e.g., the output from the OpenAI node).
- Save and Activate your n8n workflow. Copy the Production Webhook URL.
Step 2: Store n8n Webhook URL in Next.js Environment Variables
Add to `.env.local`:
```
N8N_WEBHOOK_URL_MY_TASK=your_n8n_production_webhook_url
```
Step 3: Create a Next.js API Route to Trigger the n8n Workflow
This API route acts as a proxy or a structured way to call your n8n webhook, allowing you to handle authentication or data transformation within your Next.js app if needed before hitting n8n.
Create `src/app/api/trigger-n8n/route.ts`:
```typescript
// src/app/api/trigger-n8n/route.ts
import { NextResponse } from 'next/server';

const n8nWebhookUrl = process.env.N8N_WEBHOOK_URL_MY_TASK;

export async function POST(request: Request) {
  if (!n8nWebhookUrl) {
    return NextResponse.json({ error: 'n8n webhook URL not configured' }, { status: 500 });
  }

  try {
    const body = await request.json(); // Data to send to n8n
    const response = await fetch(n8nWebhookUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    });

    if (!response.ok) {
      const errorText = await response.text();
      return NextResponse.json({ error: `n8n webhook error: ${errorText}` }, { status: response.status });
    }

    // If n8n's "Respond to Webhook" node sends data back, parse it
    const result = await response.json();
    return NextResponse.json(result);
  } catch (error: any) {
    console.error('Error calling n8n webhook:', error);
    return NextResponse.json({ error: 'Failed to call n8n webhook', details: error.message }, { status: 500 });
  }
}
```
Step 4: Call from Frontend
```tsx
// In a client component
async function triggerN8nWorkflow(payload: any) {
  const response = await fetch('/api/trigger-n8n', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  if (!response.ok) {
    const err = await response.json();
    throw new Error(err.error || 'Failed to trigger n8n workflow');
  }
  return response.json(); // Response from n8n's "Respond to Webhook" node
}

// Example usage:
// triggerN8nWorkflow({ userInput: 'Summarize this for me', textToSummarize: 'Long text here...' })
//   .then(data => console.log('n8n response:', data))
//   .catch(error => console.error(error));
```
This setup allows your Next.js frontend to trigger complex backend automations managed by n8n, which can then interact with various AI services, databases, and third-party APIs.
✅ 5. Deploying the Stack (Next.js + Hosted Services)
A. Deploying Next.js Frontend to Vercel
Vercel is the creator of Next.js and offers seamless deployment.
- Push your Next.js project to a Git provider (GitHub, GitLab, Bitbucket):
  - Initialize a git repository in your `my-ai-app` folder if you haven't already:
```bash
git init
git add .
git commit -m "Initial commit"
```
  - Create a new repository on GitHub (or your preferred provider) and push your local repository.
- Sign up/Log in to Vercel: Go to vercel.com and sign up with your Git provider account.
- Import your Project:
- From your Vercel dashboard, click “Add New…” -> “Project”.
- Import your Git repository.
- Vercel will automatically detect it as a Next.js project and configure build settings (usually no changes needed for a standard Next.js app).
- Configure Environment Variables:
- In your Vercel project settings, go to “Environment Variables”.
  - Add all the environment variables you defined in your `.env.local` file:
    - `HF_API_TOKEN` (if using the Hugging Face Inference API directly)
    - `OLLAMA_API_URL` (if using self-hosted Ollama via Elest.io – this will be the public URL Elest.io provides for your Ollama instance)
    - `OPENROUTER_API_KEY`
    - `YOUR_SITE_URL` (this should be your Vercel production URL once deployed, or a custom domain)
    - `YOUR_APP_NAME`
    - `N8N_WEBHOOK_URL_MY_TASK` (this will be the public URL Elest.io provides for your n8n webhook endpoint)
    - `SUPABASE_URL` (if using Supabase – see next section)
    - `SUPABASE_ANON_KEY` (if using Supabase – see next section)
  - Ensure these are set for Production, Preview, and Development environments as needed.
- Deploy: Click the “Deploy” button. Vercel will build and deploy your application. You will get a production URL (e.g., `my-ai-app.vercel.app`).
B. Deploying Backend Services (n8n, Ollama, Supabase) on Elest.io
- Sign up/Log in to Elest.io: Go to elest.io.
- Deploy n8n:
- From the Elest.io dashboard, click “Create Service”.
- Search for “n8n” in the catalog.
- Select your preferred cloud provider, region, and service plan (VM size).
- Follow the prompts to deploy. Elest.io will provide you with the URL for your n8n instance.
- Once deployed, access your n8n instance, create your webhook workflow (as described in section 4), and get the Production Webhook URL. Use this URL for `N8N_WEBHOOK_URL_MY_TASK` in your Vercel environment variables.
- Deploy Ollama (Optional, for self-hosted LLMs):
- In Elest.io, search for “Ollama”.
- Deploy it similarly to n8n.
- Elest.io will provide a URL and port for your Ollama instance (e.g., `http://your-ollama-instance.elest.io:11434`). The API endpoint is typically `/api/generate` or `/api/chat`.
- Use this full API endpoint URL for `OLLAMA_API_URL` in your Vercel environment variables.
- You might need to configure Ollama after deployment to pull specific models (e.g., `ollama pull llama3`). Elest.io might provide SSH access or a terminal to manage the VM if needed.
- Deploy Supabase (Optional, for Database/Auth – see next section):
  - If you choose to self-host Supabase, deploy it via Elest.io. It will provide you with the Supabase URL and `anon` key.
Important Considerations for Elest.io:
- Security: While Elest.io manages updates, ensure your services are configured securely (e.g., strong passwords for n8n, API keys for Ollama if applicable).
- Domains: Elest.io allows you to connect custom domains to your deployed services.
- Backups & Monitoring: Familiarize yourself with Elest.io’s backup and monitoring features for your services.
✅ 6. Optional: Using Supabase for Authentication and Database
Supabase provides a PostgreSQL database, authentication, storage, and real-time capabilities. You can use Supabase Cloud (generous free tier) or self-host it via Elest.io.
Option A: Supabase Cloud (Recommended for starting)
- Create a Supabase Project: Go to supabase.com, sign up, and create a new project.
- Get API URL and Anon Key: In your Supabase project dashboard, go to Project Settings -> API. You will find your Project URL and the `anon` (public) key.
- Store in Vercel Environment Variables: Add these to your Vercel project environment variables:
```
NEXT_PUBLIC_SUPABASE_URL=your_supabase_project_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
```
  (Using the `NEXT_PUBLIC_` prefix makes them available in client-side browser code.)
Option B: Self-Hosted Supabase via Elest.io
- Deploy Supabase on Elest.io:
- Search for “Supabase” in the Elest.io catalog and deploy it.
  - Elest.io will provide the Supabase URL and `anon` key for your instance.
- Store in Vercel Environment Variables: As above.
Integrating Supabase with Next.js (App Router)
- Install Supabase Client Libraries:
```bash
npm install @supabase/supabase-js @supabase/ssr
```
  (The code below uses `@supabase/ssr`, which supersedes the older `@supabase/auth-helpers-nextjs` helpers.)
- Create Supabase Client Utility: Create `src/lib/supabase/client.ts` (for client components):
```typescript
// src/lib/supabase/client.ts
import { createBrowserClient } from '@supabase/ssr';

export function createClient() {
  return createBrowserClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
  );
}
```
  Create `src/lib/supabase/server.ts` (for server components and API routes):
```typescript
// src/lib/supabase/server.ts
import { createServerClient, type CookieOptions } from '@supabase/ssr';
import { cookies } from 'next/headers';

export function createClient() {
  const cookieStore = cookies();
  return createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        get(name: string) {
          return cookieStore.get(name)?.value;
        },
        set(name: string, value: string, options: CookieOptions) {
          try {
            cookieStore.set({ name, value, ...options });
          } catch (error) {
            // The `set` method was called from a Server Component.
            // This can be ignored if you have middleware refreshing user sessions.
          }
        },
        remove(name: string, options: CookieOptions) {
          try {
            cookieStore.set({ name, value: '', ...options });
          } catch (error) {
            // The `delete` method was called from a Server Component.
          }
        },
      },
    }
  );
}
```
- Set up Auth Helper Middleware: Create `src/middleware.ts`:
```typescript
// src/middleware.ts
import { createServerClient, type CookieOptions } from '@supabase/ssr';
import { NextResponse, type NextRequest } from 'next/server';

export async function middleware(request: NextRequest) {
  let response = NextResponse.next({
    request: {
      headers: request.headers,
    },
  });

  const supabase = createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        get(name: string) {
          return request.cookies.get(name)?.value;
        },
        set(name: string, value: string, options: CookieOptions) {
          request.cookies.set({ name, value, ...options });
          response = NextResponse.next({
            request: {
              headers: request.headers,
            },
          });
          response.cookies.set({ name, value, ...options });
        },
        remove(name: string, options: CookieOptions) {
          request.cookies.set({ name, value: '', ...options });
          response = NextResponse.next({
            request: {
              headers: request.headers,
            },
          });
          response.cookies.set({ name, value: '', ...options });
        },
      },
    }
  );

  // Refresh session if expired - important to keep the user logged in
  await supabase.auth.getSession();

  return response;
}

export const config = {
  matcher: [
    /*
     * Match all request paths except for the ones starting with:
     * - _next/static (static files)
     * - _next/image (image optimization files)
     * - favicon.ico (favicon file)
     * Feel free to modify this pattern to include more paths.
     */
    '/((?!_next/static|_next/image|favicon.ico|.*\\.(?:svg|png|jpg|jpeg|gif|webp)$).*)',
  ],
};
```
- Example: Sign-up/Login Component (Client Component): You can now build UI components for authentication using `supabase.auth.signInWithPassword()`, `supabase.auth.signUp()`, `supabase.auth.signOut()`, etc. Refer to the official Supabase documentation for UI examples for the Next.js App Router (a minimal sketch follows below).
- Example: Fetching Data in a Server Component:
```tsx
// src/app/some-page/page.tsx
import { createClient } from '@/lib/supabase/server';

export default async function SomePage() {
  const supabase = createClient();
  const { data: items, error } = await supabase.from('your_table_name').select('*');
  if (error) console.error('Error fetching items:', error);

  return (
    <div>
      <h1>Items from Supabase</h1>
      {items && <pre>{JSON.stringify(items, null, 2)}</pre>}
    </div>
  );
}
```
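As a starting point for the auth UI, here is a minimal email/password login sketch (component name and styling are illustrative, not from the official docs):

```tsx
// src/app/login-form.tsx (illustrative sketch using the browser client above)
'use client';

import { useState } from 'react';
import { createClient } from '@/lib/supabase/client';

export default function LoginForm() {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');
  const [message, setMessage] = useState('');
  const supabase = createClient();

  const handleLogin = async (e: React.FormEvent) => {
    e.preventDefault();
    // Sign in with the Supabase email/password provider
    const { error } = await supabase.auth.signInWithPassword({ email, password });
    setMessage(error ? error.message : 'Logged in!');
  };

  return (
    <form onSubmit={handleLogin} className="space-y-2 p-4">
      <input type="email" value={email} onChange={(e) => setEmail(e.target.value)} placeholder="Email" className="block w-full border rounded p-2 text-black" />
      <input type="password" value={password} onChange={(e) => setPassword(e.target.value)} placeholder="Password" className="block w-full border rounded p-2 text-black" />
      <button type="submit" className="bg-indigo-600 text-white py-2 px-4 rounded">Sign in</button>
      {message && <p className="text-sm">{message}</p>}
    </form>
  );
}
```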
This completes the core integration steps. The next section will provide a mini-product example to show how these components interact.
✅ 7. Example Use Case: AI Auto-Summary and Email Digest Generator
This mini-product example demonstrates how all the components of the proposed stack can interact to create an “AI Auto-Summary and Email Digest Generator.” Users can submit articles (via URL or direct text), have them summarized, and receive periodic email digests of their summarized content.
Core Functionality:
- Users can sign up and log in (Supabase).
- Users can submit URLs or text content for summarization (Next.js frontend).
- Submitted content is processed, summarized using an LLM (via OpenRouter/Ollama, orchestrated by n8n), and stored (Supabase).
- Users receive scheduled email digests of their new summaries (n8n).
Stack Interaction:
- Next.js Frontend (Vercel):
- UI for user registration, login, content submission (URL/text), viewing summaries, and managing digest preferences.
- Calls Next.js API routes for backend interactions.
- Supabase (Hosted on Elest.io or Supabase Cloud):
  - `users` table (built-in with Supabase Auth).
  - `user_preferences` table: `user_id`, `digest_frequency` (daily, weekly), `email_address`.
  - `content_submissions` table: `id`, `user_id`, `type` (url, text), `original_content_url`, `original_text`, `created_at`.
  - `summaries` table: `id`, `submission_id`, `summary_text`, `model_used`, `generated_at`.
  (A TypeScript sketch of this data model appears after this list.)
- n8n (Hosted on Elest.io):
  - Workflow 1: Content Ingestion & Summarization (Webhook Triggered)
    - Webhook Node: Receives `userId`, `contentType` (`url` or `text`), and `contentValue` from the Next.js API route.
    - IF Node: Checks `contentType`.
    - (If URL) HTTP Request Node: Fetches content from `contentValue` (URL).
    - HTML Extract Node (Optional): Extracts the main article text from the fetched HTML.
    - Set Node: Prepares the text for summarization.
    - OpenRouter/OpenAI Node (or HTTP Request to Ollama): Sends the text to an LLM (e.g., `mistralai/mistral-7b-instruct` via OpenRouter, or a self-hosted model) for summarization. Prompt engineered for concise summaries.
    - Supabase Node (Insert): Saves the original submission to the `content_submissions` table.
    - Supabase Node (Insert): Saves the generated summary to the `summaries` table, linking it to the submission.
    - Respond to Webhook Node: Sends a success/failure message back to Next.js.
  - Workflow 2: Daily/Weekly Email Digest (Scheduled Trigger)
    - Schedule Trigger Node: Runs daily or weekly.
    - Supabase Node (Select): Fetches users who are due for a digest based on `user_preferences.digest_frequency`.
    - Loop Over Items Node: Iterates through each user.
    - Supabase Node (Select): Fetches new summaries for the current user since their last digest (requires tracking the last digest date or new summaries).
    - IF Node: Checks if there are new summaries.
    - Set Node / Function Node: Formats the email content with the summaries.
    - Email Node (e.g., SendGrid, SMTP): Sends the digest email to `user_preferences.email_address`.
    - (Optional) Supabase Node (Update): Updates a `last_digest_sent_at` timestamp for the user.
- OpenRouter.ai (or Self-Hosted Ollama on Elest.io):
- Provides the LLM for the summarization task, called by the n8n workflow.
- Elest.io:
- Hosts the n8n instance.
- Hosts Supabase (if self-hosting).
- Hosts Ollama (if self-hosting a specific summarization model).
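For reference, here is a hedged TypeScript sketch of the data model described under the Supabase item above (field names mirror the listed columns; the types and nullability are assumptions):

```typescript
// Illustrative types mirroring the tables described above; types are assumptions
type DigestFrequency = 'daily' | 'weekly';

interface UserPreferences {
  user_id: string;                     // references the built-in auth users table
  digest_frequency: DigestFrequency;
  email_address: string;
}

interface ContentSubmission {
  id: string;
  user_id: string;
  type: 'url' | 'text';
  original_content_url: string | null; // set when type === 'url'
  original_text: string | null;        // set when type === 'text'
  created_at: string;                  // ISO timestamp
}

interface Summary {
  id: string;
  submission_id: string;               // references ContentSubmission.id
  summary_text: string;
  model_used: string;
  generated_at: string;                // ISO timestamp
}
```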
High-Level Steps & Illustrative Snippets:
1. Next.js Frontend: Submitting Content
Create a form in a client component (`src/app/submit-content.tsx`):
```tsx
// src/app/submit-content.tsx (Simplified Example)
"use client";
import { useState } from "react";
import { createClient } from "@/lib/supabase/client"; // Your Supabase client
export default function SubmitContentForm() {
const [contentType, setContentType] = useState("url");
const [contentValue, setContentValue] = useState("");
const [message, setMessage] = useState("");
const supabase = createClient();
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
setMessage("Submitting...");
const { data: { user } } = await supabase.auth.getUser();
if (!user) {
setMessage("Please log in to submit content.");
return;
}
try {
const response = await fetch("/api/submit-for-summary", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ userId: user.id, contentType, contentValue }),
});
const result = await response.json();
if (response.ok) {
setMessage(`Submission successful! ${result.message || ""}`);
setContentValue("");
} else {
setMessage(`Error: ${result.error || "Failed to submit"}`);
}
} catch (error) {
setMessage("Client-side error during submission.");
console.error(error);
}
};
// ... JSX for the form ...
return (
<form onSubmit={handleSubmit} className="space-y-4 p-4 bg-white shadow-md rounded-lg">
<div>
<label htmlFor="contentType" className="block text-sm font-medium text-gray-700">Content Type:</label>
<select id="contentType" value={contentType} onChange={(e) => setContentType(e.target.value)} className="mt-1 block w-full pl-3 pr-10 py-2 text-base border-gray-300 focus:outline-none focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm rounded-md text-black">
<option value="url">URL</option>
<option value="text">Text</option>
</select>
</div>
<div>
<label htmlFor="contentValue" className="block text-sm font-medium text-gray-700">
{contentType === "url" ? "Article URL" : "Text to Summarize"}
</label>
{contentType === "url" ? (
<input type="url" id="contentValue" value={contentValue} onChange={(e) => setContentValue(e.target.value)} required className="mt-1 focus:ring-indigo-500 focus:border-indigo-500 block w-full shadow-sm sm:text-sm border-gray-300 rounded-md p-2 text-black" placeholder="https://example.com/article" />
) : (
<textarea id="contentValue" value={contentValue} onChange={(e) => setContentValue(e.target.value)} required rows={6} className="mt-1 focus:ring-indigo-500 focus:border-indigo-500 block w-full shadow-sm sm:text-sm border-gray-300 rounded-md p-2 text-black" placeholder="Paste your text here..."></textarea>
)}
</div>
<div>
<button type="submit" className="w-full flex justify-center py-2 px-4 border border-transparent rounded-md shadow-sm text-sm font-medium text-white bg-indigo-600 hover:bg-indigo-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-indigo-500">
Submit for Summary
</button>
</div>
{message && <p className="text-sm text-gray-600">{message}</p>}
</form>
);
}
```
2. Next.js API Route: /api/submit-for-summary/route.ts
This route forwards the request to the n8n webhook.
```typescript
// src/app/api/submit-for-summary/route.ts
import { NextResponse } from "next/server";
import { createClient } from "@/lib/supabase/server"; // Server client for auth check
const N8N_SUMMARIZATION_WEBHOOK_URL = process.env.N8N_SUMMARIZATION_WEBHOOK_URL;
export async function POST(request: Request) {
  const supabase = createClient();
const { data: { user } } = await supabase.auth.getUser();
if (!user) {
return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}
if (!N8N_SUMMARIZATION_WEBHOOK_URL) {
return NextResponse.json({ error: "Summarization service not configured" }, { status: 500 });
}
try {
const { contentType, contentValue } = await request.json();
if (!contentType || !contentValue) {
return NextResponse.json({ error: "Missing contentType or contentValue" }, { status: 400 });
}
// Forward to n8n webhook
const n8nResponse = await fetch(N8N_SUMMARIZATION_WEBHOOK_URL, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ userId: user.id, contentType, contentValue }),
});
if (!n8nResponse.ok) {
const errorText = await n8nResponse.text();
console.error("n8n error:", errorText);
return NextResponse.json({ error: "Summarization request failed", details: errorText }, { status: n8nResponse.status });
}
const result = await n8nResponse.json();
return NextResponse.json(result);
} catch (error: any) {
console.error("API route error:", error);
return NextResponse.json({ error: "Internal server error", details: error.message }, { status: 500 });
}
}
```
Remember to add `N8N_SUMMARIZATION_WEBHOOK_URL` to your `.env.local` and Vercel environment variables.
3. n8n Workflow: Content Ingestion & Summarization
- Trigger: Webhook node (Path: e.g., `/summarize-content`, Method: POST).
- Node 2 (Set): Extract `userId`, `contentType`, `contentValue` from the webhook body.
  - `userId`: `{{ $json.body.userId }}`
  - `contentType`: `{{ $json.body.contentType }}`
  - `contentValue`: `{{ $json.body.contentValue }}`
- Node 3 (IF): Condition: `{{ $json.contentType === "url" }}`
- Node 4 (HTTP Request – True branch of IF):
  - URL: `{{ $json.contentValue }}`
  - Method: GET
  - Response Format: HTML
- Node 5 (HTML Extract – True branch of IF):
  - Source Data: Output of the HTTP Request node.
  - Extraction Values: Define CSS selectors to get the main article text (e.g., `article`, `p`, `.post-content p`). This can be tricky and site-dependent.
  - Output: `extractedText`
- Node 6 (Merge/Set): Prepare `textToSummarize`.
  - If URL path: `{{ $node["HTML Extract"].json.extractedText }}`
  - If text path: `{{ $json.contentValue }}`
- Node 7 (OpenAI/OpenRouter – Chat Completion):
  - Authentication: Your API key (OpenRouter or OpenAI).
  - Model: e.g., `mistralai/mistral-7b-instruct` (for OpenRouter) or `gpt-3.5-turbo`.
  - Messages (Prompt):
```
[
  { "role": "system", "content": "You are an expert summarizer. Provide a concise summary of the following text in 2-3 sentences." },
  { "role": "user", "content": "{{ $node[\"Merge/Set\"].json.textToSummarize }}" }
]
```
- Node 8 (Supabase Insert – `content_submissions`):
  - Table: `content_submissions`
  - Columns: `user_id` (`{{ $json.userId }}`), `type` (`{{ $json.contentType }}`), `original_content_url` (if URL), `original_text` (if text).
- Node 9 (Supabase Insert – `summaries`):
  - Table: `summaries`
  - Columns: `submission_id` (output from Node 8), `summary_text` (`{{ $node["OpenAI/OpenRouter - Chat Completion"].json.choices[0].message.content }}`), `model_used` (e.g., "mistral-7b-instruct").
- Node 10 (Respond to Webhook):
  - Body: `{{ { "message": "Summary processing started.", "summaryId": $node["Supabase Insert - summaries"].json[0].id } }}`
4. n8n Workflow: Email Digest (Scheduled)
This workflow is more complex, involving loops and fetching user-specific data. The key nodes would be:
- Schedule Trigger: Daily/Weekly.
- Supabase Select: Get users due for digest.
- Loop Over Items: For each user.
- Supabase Select: Get new summaries for that user.
- Set/Function: Format email HTML.
- Email Node: Send email.
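To illustrate the formatting step, here is a hedged sketch of the logic the Set/Function node might implement (n8n Function nodes actually run JavaScript; this TypeScript version just shows the shape of the transformation, and the field names follow the `summaries` table above):

```typescript
// Hedged sketch of digest formatting; field names follow the `summaries` table described earlier
interface DigestSummary {
  summary_text: string;
  generated_at: string;
}

function formatDigestHtml(summaries: DigestSummary[]): string {
  const items = summaries
    .map((s) => `<li><p>${s.summary_text}</p><small>${s.generated_at}</small></li>`)
    .join('');
  return `<h1>Your summary digest</h1><ul>${items}</ul>`;
}

// Example usage:
// const html = formatDigestHtml(newSummariesForUser);
// ...pass `html` to the Email node as the message body
```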
This example illustrates the end-to-end flow, showcasing how each component plays its part. Indie Hackers can adapt and expand upon this foundation to build more sophisticated AI-powered applications.
Hugging Face Transformers.js with Next.js Integration
Source: https://huggingface.co/docs/transformers.js/en/tutorials/next
Building a Next.js application
In this tutorial, we’ll build a simple Next.js application that performs sentiment analysis using Transformers.js! Since Transformers.js can run in the browser or in Node.js, you can choose whether you want to perform inference client-side or server-side (we’ll show you how to do both). In either case, we will be developing with the new App Router paradigm.
Prerequisites
- Node.js version 18+
- npm version 9+
Client-side inference
Step 1: Initialise the project
Start by creating a new Next.js application using `create-next-app`:
```bash
npx create-next-app@latest
```
On installation, you’ll see various prompts. For this demo, answer them according to your preferences; the rest of the tutorial assumes the `src/` directory and the App Router:
```
√ What is your project named? ... next
√ Would you like to use TypeScript? ... No / Yes
√ Would you like to use ESLint? ... No / Yes
√ Would you like to use Tailwind CSS? ... No / Yes
√ Would you like to use `src/` directory? ... No / Yes
√ Would you like to use App Router? (recommended) ... No / Yes
√ Would you like to customize the default import alias? ... No / Yes
```
Step 2: Install and configure Transformers.js
You can install Transformers.js from NPM with the following command:
```bash
npm i @huggingface/transformers
```
We also need to update the next.config.js
file to ignore node-specific modules when bundling for the browser:
```javascript
/** @type {import('next').NextConfig} */
const nextConfig = {
  // (Optional) Export as a static site
  // See https://nextjs.org/docs/pages/building-your-application/deploying/static-exports#configuration
  output: 'export', // Feel free to modify/remove this option

  // Override the default webpack configuration
  webpack: (config) => {
    // See https://webpack.js.org/configuration/resolve/#resolvealias
    config.resolve.alias = {
      ...config.resolve.alias,
      "sharp$": false,
      "onnxruntime-node$": false,
    }
    return config;
  },
}

module.exports = nextConfig
```
Next, we’ll create a new Web Worker script where we’ll place all ML-related code. This is to ensure that the main thread is not blocked while the model is loading and performing inference. For this application, we’ll be using `Xenova/distilbert-base-uncased-finetuned-sst-2-english`, a ~67M parameter model finetuned on the Stanford Sentiment Treebank dataset. Add the following code to `./src/app/worker.js`:
```javascript
import { pipeline, env } from "@huggingface/transformers";
// Skip local model check
env.allowLocalModels = false;
// Use the Singleton pattern to enable lazy construction of the pipeline.
class PipelineSingleton {
static task = 'text-classification';
static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
static instance = null;
static async getInstance(progress_callback = null) {
if (this.instance === null) {
this.instance = pipeline(this.task, this.model, { progress_callback });
}
return this.instance;
}
}
// Listen for messages from the main thread
self.addEventListener('message', async (event) => {
// Retrieve the classification pipeline. When called for the first time,
// this will load the pipeline and save it for future use.
let classifier = await PipelineSingleton.getInstance(x => {
// We also add a progress callback to the pipeline so that we can
// track model loading.
self.postMessage(x);
});
// Actually perform the classification
let output = await classifier(event.data.text);
// Send the output back to the main thread
self.postMessage({
status: 'complete',
output: output,
});
});
```
Step 3: Design the user interface
We’ll now modify the default `./src/app/page.js` file so that it connects to our worker thread. Since we’ll only be performing in-browser inference, we can opt in to Client Components using the `'use client'` directive.
```javascript
'use client'
import { useState, useEffect, useRef, useCallback } from 'react'
export default function Home() {
/* TODO: Add state variables */
// Create a reference to the worker object.
const worker = useRef(null);
// We use the `useEffect` hook to set up the worker as soon as the `App` component is mounted.
useEffect(() => {
if (!worker.current) {
// Create the worker if it does not yet exist.
worker.current = new Worker(new URL('./worker.js', import.meta.url), {
type: 'module'
});
}
// Create a callback function for messages from the worker thread.
const onMessageReceived = (e) => { /* TODO: See below */ };
// Attach the callback function as an event listener.
worker.current.addEventListener('message', onMessageReceived);
// Define a cleanup function for when the component is unmounted.
return () => worker.current.removeEventListener('message', onMessageReceived);
});
const classify = useCallback((text) => {
if (worker.current) {
worker.current.postMessage({ text });
}
}, []);
return ( /* TODO: See below */ )
}
```
Initialise the following state variables at the beginning of the `Home` component:
```javascript
// Keep track of the classification result and the model loading status.
const [result, setResult] = useState(null);
const [ready, setReady] = useState(null);
```
and fill in the `onMessageReceived` function to update these variables when the worker thread sends a message:
```javascript
const onMessageReceived = (e) => {
  switch (e.data.status) {
    case 'initiate':
      setReady(false);
      break;
    case 'ready':
      setReady(true);
      break;
    case 'complete':
      setResult(e.data.output[0])
      break;
  }
};
```
Finally, we can add a simple UI to the `Home` component, consisting of an input textbox and a preformatted text element to display the classification result:
```jsx
<main className="flex min-h-screen flex-col items-center justify-center p-12">
<h1 className="text-5xl font-bold mb-2 text-center">Transformers.js</h1>
<h2 className="text-2xl mb-4 text-center">Next.js template</h2>
<input
className="w-full max-w-xs p-2 border border-gray-300 rounded mb-4"
type="text"
placeholder="Enter text here"
onInput={e => {
classify(e.target.value);
}}
/>
{ready !== null && (
<pre className="bg-gray-100 p-2 rounded">
{ (!ready || !result) ? 'Loading...' : JSON.stringify(result, null, 2) }
</pre>
)}
</main>
```
(Optional) Step 4: Build and deploy
To build the application, run:
```bash
npm run build
```
This will create a production-ready build in the `./out` directory (since we specified `output: 'export'` in `next.config.js`). You can then deploy this directory to any static hosting provider.
If you want to deploy to Hugging Face Spaces, you can follow these steps:
- If you haven’t already, create a free Hugging Face account at huggingface.co.
- Create a new Dockerfile in your project’s root folder. You can use our example Dockerfile as a template.
- Visit https://huggingface.co/new-space and fill in the form. Remember to select “Docker” as the space type (you can choose the “Blank” Docker template).
- Click the “Create space” button at the bottom of the page.
- Go to “Files” → “Add file” → “Upload files”. Drag the files from your project folder (excluding `node_modules` and `.next`, if present) into the upload box and click “Upload”. After they have uploaded, scroll down to the button and click “Commit changes to main”.
- Add the following lines to the top of your `README.md`:
```yaml
---
title: Next Client Example App
emoji: 🔥
colorFrom: yellow
colorTo: red
sdk: docker # Important: select "docker" as the SDK
pinned: false
app_port: 3000 # Important: specify the port your app is running on
---
```
That’s it! Your application should now be live at `https://huggingface.co/spaces/<your-username>/<your-space-name>`!
Server-side inference
(The tutorial continues with server-side inference steps, which are similar but involve API routes in Next.js and running the pipeline in the Node.js environment. The key difference is that the ML model runs on the server, not in the user’s browser.)
Key aspects for server-side:
- Create an API route (e.g., `src/app/api/classify/route.js`).
- Instantiate and use the pipeline within this API route.
- The `next.config.js` webpack modification for `sharp` and `onnxruntime-node` might not be needed, or might be different, for server-side-only execution.
- The frontend makes `fetch` requests to this API endpoint.
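As a rough sketch of what that API route might look like (assuming `@huggingface/transformers` running in the Node.js runtime and the same model as the client-side example; the official server-side tutorial may differ in detail):

```typescript
// src/app/api/classify/route.ts: hedged sketch of server-side inference
import { NextResponse } from 'next/server';
import { pipeline } from '@huggingface/transformers';

// Lazily create the pipeline once and reuse it across requests
let classifierPromise: Promise<any> | null = null;
function getClassifier() {
  if (!classifierPromise) {
    classifierPromise = pipeline(
      'text-classification',
      'Xenova/distilbert-base-uncased-finetuned-sst-2-english'
    );
  }
  return classifierPromise;
}

export async function POST(request: Request) {
  const { text } = await request.json();
  if (!text) {
    return NextResponse.json({ error: 'Text is required' }, { status: 400 });
  }
  const classifier = await getClassifier();
  const output = await classifier(text);
  return NextResponse.json(output);
}
```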