A Complete Migration Guide From WordPress to Next.js
September 5, 2025
TLDR:
I migrated carmelosantana.com from WordPress to Next.js, preserving all WordPress features while adding modern capabilities like llms.txt, automated deployments, and enhanced performance. This guide covers the complete process, from content export to production deployment.
After 15+ years of building WordPress sites, I made the decision to migrate my personal website from WordPress to Next.js. WordPress no longer serves me or my core clients' needs. I find the ecosystem a bit bulky, and I'm looking for a more streamlined, modern approach to web development.
This migration preserved every feature users expect from a WordPress site while adding capabilities that would have been complex to implement in WordPress.
Table of Contents
- Why Migrate from WordPress?
- Planning and Preparation
- Design Conversion with v0.dev
- Content Export and Migration
- Building the Next.js Foundation
- Preserving WordPress Features
- Adding Modern Capabilities
- Deployment and Automation
- SEO and Search Engine Considerations
- Results and Performance
- Lessons Learned
Why Migrate from WordPress?
The decision to move away from WordPress was simple: I wanted to keep up with modern web development practices and improve my site's performance and maintainability. The main draws:
Content as Code: Version-controlled markdown files instead of database content
Performance: Static generation with dynamic capabilities when needed
Developer Experience: Modern tooling
AI Integration: Native support for the llms.txt specification and markdown endpoints
Deployment Simplicity: Git-based deployments with automatic builds
With a strong desire to remove SQL service dependencies, I was excited to return to a statically generated site.
Planning and Preparation
Content Audit and Export Strategy
Before touching any code, I conducted a comprehensive audit of my WordPress site:
- ~100 pages and posts spanning 2008-2025
- Custom post types for projects and portfolio items
- Media library with hundreds of images
- SEO metadata and permalink structures
- Comments and user interactions
- Custom redirects and legacy URL handling
The key insight: I needed to preserve not just content, but the entire user experience and SEO value built over years.
Technology Stack Selection
Frontend: Next.js + Shadcn UI
Content: Markdown with gray-matter frontmatter
Styling: Tailwind CSS with custom components
Deployment: Vercel with automatic GitHub integration
Content Processing: Custom build-time generators
Design Conversion with v0.dev
One of the most challenging aspects of any migration is recreating the visual design and user experience. Rather than starting from scratch with design decisions, I used v0.dev (now v0.app) to quickly prototype and generate the initial Next.js components based on my existing WordPress site.
Initial v0.dev Prompt
I provided v0.dev with a comprehensive prompt that captured both the technical requirements and design philosophy:
Please help me convert my current website from wordpress. I want to make a new modern and sleek nextjs shadcn website, keeping a similar look and feel from my previous website. I have attached a large screenshot from desktop and another showing mobile responsiveness. The site should be easy to edit. I plan on adding more projects in the future. the projects, and any other section on the site that can be a defined list should be configured in json. this will make edits easier in the future. I want to incorporate new elements of my life, a new more humble tone. I also attached guidlines for how to speak my tone of voice. lastly i want to fully support light and dark mode. this should considered from the very begginging so this feature works as expected. we will add more pages, with varied styles in the future. I also included my resume which may help add a more personal touch. https://carmelosantana.com if you want to review the live site.
Key Design Requirements
Based on my WordPress site analysis, I specified several critical requirements:
Visual Consistency: Maintain the clean, professional aesthetic while modernizing the typography and spacing
Component Architecture: Build reusable components for projects, blog posts, and other content sections
Data-Driven Content: Configure projects and other lists in JSON for easy future updates
Accessibility: Full light/dark mode support built from the foundation
Mobile Responsiveness: Ensure the design works flawlessly across all device sizes
Personal Touch: Incorporate a more humble, authentic tone throughout the design
v0.dev Workflow
The v0.dev process involved several iterations:
Initial Generation: v0.dev created the base layout and component structure using shadcn/ui
Iterative Refinement: I provided feedback on spacing, typography, and component behavior
Component Extraction: Breaking down complex sections into reusable, maintainable components
Dark Mode Integration: Ensuring all components worked seamlessly in both light and dark themes (see the sketch below)
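To ground the dark mode requirement, here is a minimal sketch of the usual shadcn/ui + next-themes wiring. It is my illustration of the pattern, not necessarily the exact code v0.dev generated:
'use client'

// components/theme-provider.tsx - client-side wrapper around next-themes
import { ThemeProvider as NextThemesProvider } from 'next-themes'
import type { ReactNode } from 'react'

export function ThemeProvider({ children }: { children: ReactNode }) {
  // attribute="class" toggles Tailwind's `dark` class on <html>,
  // so every component styled with dark: variants follows the active theme
  return (
    <NextThemesProvider attribute="class" defaultTheme="system" enableSystem>
      {children}
    </NextThemesProvider>
  )
}
The provider wraps the app once in app/layout.tsx (with suppressHydrationWarning on the html element), and the header's theme toggle simply calls useTheme from next-themes.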
Generated Component Architecture
v0.dev helped establish a clean component structure that became the foundation for the entire site:
// Component structure generated by v0.dev
components/
├── ui/ // shadcn/ui base components
├── layout/
│ ├── header.tsx // Navigation with theme toggle
│ ├── footer.tsx // Contact links and social
│ └── sidebar.tsx // Mobile navigation
├── sections/
│ ├── hero.tsx // Homepage hero section
│ ├── about.tsx // About section with personal touch
│ ├── projects.tsx // Project grid with filtering
│ └── blog.tsx // Latest blog posts
└── content/
├── project-card.tsx
├── blog-card.tsx
└── cta-section.tsx
Data Configuration Strategy
Following the initial prompt's emphasis on easy editing, v0.dev helped structure the data layer:
// lib/data.ts - JSON-configurable content
export const projects = [
{
title: "Alpaca Bot",
description: "AI interface for automated trading systems",
technologies: ["Python", "FastAPI", "React"],
href: "/projects/alpaca-bot",
featured: true
},
// Additional projects...
];
export const services = [
{
title: "WordPress Architecture",
description: "Enterprise WordPress solutions and migrations",
icon: "Code",
href: "/services#wordpress"
},
// Additional services...
];
Design System Foundation
v0.dev established a consistent design system that carried through the entire migration:
Typography: Clean, readable fonts with proper hierarchy
Spacing: Consistent spacing scale using Tailwind's system
Colors: Professional color palette with full dark mode support
Components: Reusable UI components built on shadcn/ui
Animations: Subtle, purposeful animations that enhance UX
Benefits of the v0.dev Approach
Rapid Prototyping: Got a working design in hours instead of days
Modern Foundation: Built on current best practices and frameworks
Component Consistency: Established patterns that scaled across the site
Accessibility: Dark mode and responsive design built from the start
Maintainability: Clean, documented code that's easy to extend
The v0.dev-generated foundation provided an excellent starting point that accelerated the entire migration process. Rather than spending weeks on design decisions and component architecture, I could focus on content migration and feature development.
Content Export and Migration
WordPress Content Export
I used multiple export methods to ensure comprehensive content capture:
# 1. WordPress XML export via admin panel
# Tools > Export > All content > Download Export File
# 2. Database backup for reference
mysqldump -u username -p database_name > wordpress_backup.sql
# 3. Media files via rsync from server
rsync -av /path/to/wp-content/uploads/ ./public/images/
Conversion Pipeline
The WordPress XML export required conversion to markdown. I created a custom processing script based on wordpress-export-to-markdown:
# Initial conversion
npx wordpress-export-to-markdown --input=export.xml --output=content/blog
Content Cleanup Scripts
After the initial conversion, I built several cleanup scripts to handle WordPress-specific formatting issues:
HTML to Markdown Conversion Script:
#!/usr/bin/env node
// scripts/comprehensive-blog-cleanup.js
const fs = require('fs');
const path = require('path');
function htmlToMarkdown(content) {
// Remove WordPress block comments
content = content.replace(/<!-- wp:[^>]*-->/g, '');
content = content.replace(/<!-- \/wp:[^>]*-->/g, '');
// Convert headings
content = content.replace(/<h([1-6])[^>]*>(.*?)<\/h[1-6]>/gs, (match, level, text) => {
return '#'.repeat(parseInt(level)) + ' ' + text;
});
// Convert code blocks
content = content.replace(/<pre[^>]*><code[^>]*>(.*?)<\/code><\/pre>/gs, '```\n$1\n```');
// Convert images and remove WordPress size suffixes
content = content.replace(/<img[^>]*src="([^"]*)"[^>]*alt="([^"]*)"[^>]*\/?>/gs, (match, src, alt) => {
// Remove WordPress size suffixes like -150x150, -300x200
const cleanSrc = src.replace(/-\d+x\d+(\.[^.]+)$/, '$1');
return `![${alt}](${cleanSrc})`;
});
// Clean up extra whitespace
content = content.replace(/\n{3,}/g, '\n\n');
return content.trim();
}
// Process all markdown files
const blogDir = '/home/carmelo/html/content/blog';
const files = fs.readdirSync(blogDir).filter(file => file.endsWith('.md'));
files.forEach(file => {
const filePath = path.join(blogDir, file);
const content = fs.readFileSync(filePath, 'utf8');
const parts = content.split('---');
if (parts.length >= 3) {
const frontmatter = parts[1];
const bodyContent = parts.slice(2).join('---');
const cleanedContent = htmlToMarkdown(bodyContent);
const newContent = `---${frontmatter}---\n\n${cleanedContent}`;
fs.writeFileSync(filePath, newContent);
console.log(`Cleaned: ${file}`);
}
});
Missing Posts Extraction Script:
#!/usr/bin/env node
// scripts/extract-missing-posts.js
const fs = require('fs');
const path = require('path');
const blogDir = '/home/carmelo/html/content/blog';
// Reuses htmlToMarkdown plus the formatDate, extractTags, and generateExcerpt
// helpers defined elsewhere in this script (omitted here for brevity).
function extractMissingPosts() {
const xmlPath = '/home/carmelo/html/resources/carmelosantana.WordPress.2025-07-24-posts.xml';
const xmlContent = fs.readFileSync(xmlPath, 'utf8');
// Extract posts from WordPress XML
const postRegex = /<item>[\s\S]*?<\/item>/g;
let match;
while ((match = postRegex.exec(xmlContent)) !== null) {
const postContent = match[0];
// Extract title, content, date, and slug
const titleMatch = postContent.match(/<title><!\[CDATA\[(.*?)\]\]><\/title>/);
const contentMatch = postContent.match(/<content:encoded><!\[CDATA\[([\s\S]*?)\]\]><\/content:encoded>/);
const dateMatch = postContent.match(/<pubDate>(.*?)<\/pubDate>/);
const slugMatch = postContent.match(/<wp:post_name><!\[CDATA\[(.*?)\]\]><\/wp:post_name>/);
if (titleMatch && contentMatch && dateMatch && slugMatch) {
const slug = slugMatch[1];
const existingFile = path.join(blogDir, `${slug}.md`);
// Only create if file doesn't exist
if (!fs.existsSync(existingFile)) {
const frontmatter = `---
title: "${titleMatch[1].replace(/"/g, '\\"')}"
date: "${formatDate(dateMatch[1])}"
slug: "${slug}"
tags: ${JSON.stringify(extractTags(xmlContent, match.index, match.index + match[0].length))}
excerpt: "${generateExcerpt(contentMatch[1])}"
---`;
const markdownContent = htmlToMarkdown(contentMatch[1]);
const fullContent = `${frontmatter}\n\n${markdownContent}`;
fs.writeFileSync(existingFile, fullContent);
console.log(`Created missing post: ${slug}`);
}
}
}
}

extractMissingPosts();
Post-processing challenges solved:
- Frontmatter normalization: Standardized metadata fields across posts
- Image path updates: Converted WordPress media URLs to local references and removed size suffixes
- Category mapping: Transformed WordPress categories to Next.js tags
- URL slug preservation: Maintained SEO-friendly permalinks
- WordPress block cleanup: Removed Gutenberg block comments and HTML artifacts
HTML Archive Creation
I created a complete HTML snapshot of the WordPress site before migration:
# Create complete website mirror for reference
wget --mirror --convert-links --adjust-extension \
--page-requisites --no-parent \
https://carmelosantana.com
This served as both a backup and reference for design consistency during the rebuild.
Image Optimization Pipeline
WordPress creates multiple image sizes for each upload, cluttering the media library. I built scripts to clean this up:
Image Analysis Script:
#!/bin/bash
# scripts/analyze-images.sh
IMAGES_DIR="/home/carmelo/html/public/images"
echo "🔍 Analyzing images in: $IMAGES_DIR"
# Count all images
TOTAL_IMAGES=$(find "$IMAGES_DIR" -type f \( -name "*.jpg" -o -name "*.jpeg" -o -name "*.png" -o -name "*.gif" -o -name "*.webp" \) | wc -l)
CROPPED_IMAGES=$(find "$IMAGES_DIR" -type f \( -name "*.jpg" -o -name "*.jpeg" -o -name "*.png" -o -name "*.gif" -o -name "*.webp" \) -name "*-[0-9]*x[0-9]*.*" | wc -l)
ORIGINAL_IMAGES=$((TOTAL_IMAGES - CROPPED_IMAGES))
echo "📊 Image Statistics:"
echo " Total images: $TOTAL_IMAGES"
echo " WordPress cropped images (to be removed): $CROPPED_IMAGES"
echo " Original images (to be kept): $ORIGINAL_IMAGES"
# Show examples of cropped images that will be removed
echo "🗑️ Examples of WordPress cropped images that will be REMOVED:"
find "$IMAGES_DIR" -type f -name "*-[0-9]*x[0-9]*.*" | head -5 | while read file; do
size=$(du -h "$file" | cut -f1)
rel_path=$(echo "$file" | sed "s|$IMAGES_DIR/||")
echo " $rel_path ($size)"
done
Image Optimization Script:
#!/bin/bash
# scripts/optimize-images.sh
set -e
IMAGES_DIR="/home/carmelo/html/public/images"
BACKUP_DIR="/home/carmelo/html/backup/images-$(date +%Y%m%d-%H%M%S)"
echo "🖼️ Starting image cleanup and optimization..."
# Create backup directory
echo "📦 Creating backup at: $BACKUP_DIR"
mkdir -p "$BACKUP_DIR"
cp -r "$IMAGES_DIR" "$BACKUP_DIR/"
# Remove WordPress cropped images (files with -NxN pattern)
echo "🗑️ Removing WordPress cropped images..."
find "$IMAGES_DIR" -type f \( -name "*.jpg" -o -name "*.jpeg" -o -name "*.png" -o -name "*.gif" -o -name "*.webp" \) -name "*-[0-9]*x[0-9]*.*" -delete
# Optimize remaining images with ImageMagick
if command -v magick &> /dev/null; then
echo "🔧 Optimizing remaining images..."
# Optimize JPEG images
find "$IMAGES_DIR" -type f \( -name "*.jpg" -o -name "*.jpeg" \) -exec bash -c '
file="$1"
echo " Optimizing: $(basename "$file")"
magick "$file" -quality 85 -strip "$file"
' _ {} \;
# Optimize PNG images
find "$IMAGES_DIR" -type f -name "*.png" -exec bash -c '
file="$1"
echo " Optimizing: $(basename "$file")"
magick "$file" -strip "$file"
' _ {} \;
else
echo "⚠️ ImageMagick not found. Install with: brew install imagemagick"
fi
echo "✨ Image cleanup and optimization complete!"
Building the Next.js Foundation
Dynamic Route Architecture
I replicated WordPress's flexible routing using Next.js dynamic routes:
// app/blog/[slug]/page.tsx - Individual blog posts
// app/journal/[slug]/page.tsx - Personal journal entries
// app/projects/[slug]/page.tsx - Portfolio items
This approach mirrors WordPress custom post types, allowing different content types with unique layouts and functionality.
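As a sketch of what one of these routes looks like in practice (the markup is illustrative; getAllPosts and getPostData are the content utilities covered in the next section):
// app/blog/[slug]/page.tsx - sketch of a dynamic content route
import { getAllPosts, getPostData } from '@/lib/content'

// Pre-render every post at build time, much like a static archive of a post type
export async function generateStaticParams() {
  const posts = await getAllPosts('blog')
  return posts.map((post) => ({ slug: post.slug }))
}

export default async function BlogPostPage({ params }: { params: { slug: string } }) {
  const post = await getPostData('blog', params.slug)

  return (
    <article className="prose dark:prose-invert mx-auto">
      <h1>{post.title}</h1>
      {/* getPostData returns markdown already converted to HTML */}
      <div dangerouslySetInnerHTML={{ __html: post.content }} />
    </article>
  )
}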
Content Management System
I built a markdown-first CMS using gray-matter and custom utilities:
// lib/content.ts
import fs from 'fs'
import path from 'path'
import matter from 'gray-matter'
import { markdownToHtml } from './markdown' // local remark-based helper (path assumed)

export async function getPostData(type: 'blog' | 'journal', slug: string) {
const directory = path.join(process.cwd(), 'content', type)
const filePath = path.join(directory, `${slug}.md`)
const fileContents = fs.readFileSync(filePath, 'utf8')
const { data, content } = matter(fileContents)
return {
slug,
...data,
content: await markdownToHtml(content)
}
}
Benefits over WordPress:
- Version control for all content changes
- No database dependencies or corruption risks
- Predictable content structure with TypeScript interfaces (see the sketch after this list)
- Fast builds with static generation
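For illustration, here is roughly what the typed frontmatter shape and the getAllPosts helper used throughout this post could look like. The field names are a sketch rather than the site's definitive API, and it reuses the imports from the getPostData snippet above:
// lib/content.ts (continued) - illustrative frontmatter type and listing helper
export interface PostMeta {
  slug: string
  title: string
  date: string
  excerpt?: string
  tags?: string[]
  image?: string
}

export async function getAllPosts(type: 'blog' | 'journal'): Promise<PostMeta[]> {
  const directory = path.join(process.cwd(), 'content', type)

  return fs
    .readdirSync(directory)
    .filter((file) => file.endsWith('.md'))
    .map((file) => {
      const { data } = matter(fs.readFileSync(path.join(directory, file), 'utf8'))
      return { slug: file.replace(/\.md$/, ''), ...data } as PostMeta
    })
    .sort((a, b) => new Date(b.date).getTime() - new Date(a.date).getTime())
}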
Preserving WordPress Features
Permalink Structure
Maintained exact URL compatibility to preserve SEO value:
// next.config.mjs
const redirects = [
{
source: '/2014/02/05/motivation',
destination: '/blog/motivation',
permanent: true,
},
// 200+ individual redirects for legacy URLs
]
RSS Feed Recreation
WordPress RSS functionality was recreated using Next.js API routes:
// app/blog/rss.xml/route.ts
export async function GET() {
const posts = await getAllPosts('blog')
const rss = `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
<title>Carmelo Santana</title>
<description>Technical lead empowering creators through mindful technology</description>
${posts.map(post => `
<item>
<title>${post.title}</title>
<link>https://carmelosantana.com/blog/${post.slug}</link>
<description>${post.excerpt}</description>
<pubDate>${new Date(post.date).toUTCString()}</pubDate>
</item>
`).join('')}
</channel>
</rss>`
return new Response(rss, {
headers: { 'Content-Type': 'application/xml' }
})
}
Comment System Migration
WordPress comments were preserved as static content within post frontmatter:
---
title: "Post Title"
comments:
  - author: "Commenter Name"
    date: "2014-02-10"
    content: "Original comment content"
---
While losing dynamic commenting, this preserved valuable historical discussions and maintained content completeness.
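Rendering them is then a small presentational component. This is a sketch with my own component and class names, assuming the frontmatter shape above:
// components/content/static-comments.tsx - display archived WordPress comments
interface StaticComment {
  author: string
  date: string
  content: string
}

export function StaticComments({ comments }: { comments?: StaticComment[] }) {
  if (!comments?.length) return null

  return (
    <section className="mt-12 border-t pt-8">
      <h2 className="text-xl font-semibold mb-4">Archived Comments</h2>
      {comments.map((comment, index) => (
        <div key={index} className="mb-6">
          <p className="font-medium">
            {comment.author} <span className="text-sm text-muted-foreground">{comment.date}</span>
          </p>
          <p>{comment.content}</p>
        </div>
      ))}
    </section>
  )
}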
SEO Metadata Preservation
All WordPress SEO data was migrated and enhanced:
// Dynamic metadata generation
export async function generateMetadata({ params }): Promise<Metadata> {
const post = await getPostData('blog', params.slug)
return {
title: post.title,
description: post.excerpt,
openGraph: {
title: post.title,
description: post.excerpt,
url: `https://carmelosantana.com/blog/${post.slug}`,
images: post.image ? [post.image] : undefined,
},
}
}
Adding Modern Capabilities
LLMs.txt Implementation
One of the most exciting additions was implementing the llms.txt specification for AI-friendly content discovery:
// Auto-generated at build time
const llmsTxt = `# Carmelo Santana - Technical Lead & WordPress Architect
> Technical lead empowering creators through mindful technology.
## Blog & Technical Writing
- [Blog Posts](https://carmelosantana.com/blog.md)
## Digital Experiences
- [Featured Projects](https://carmelosantana.com/projects.md)
`
This enables AI systems to discover and index content more effectively than traditional web scraping. You can see this blog post listed in my site's llms.txt file under the Blog & Technical Writing section.
Markdown API Endpoints
Created direct markdown access for AI consumption:
// app/blog/md/route.ts - Serves raw markdown content
// Accessible via https://carmelosantana.com/blog.md
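Here is a minimal sketch of what that route handler can look like. The concatenated output format, and the rewrite that maps /blog.md to /blog/md, are my assumptions about the wiring:
// app/blog/md/route.ts - sketch: serve the blog as raw markdown
import fs from 'fs'
import path from 'path'
import { getAllPosts } from '@/lib/content'

export async function GET() {
  const posts = await getAllPosts('blog')

  // Concatenate the source markdown files, newest first
  const markdown = posts
    .map((post) => {
      const filePath = path.join(process.cwd(), 'content', 'blog', `${post.slug}.md`)
      return fs.readFileSync(filePath, 'utf8')
    })
    .join('\n\n---\n\n')

  return new Response(markdown, {
    headers: { 'Content-Type': 'text/markdown; charset=utf-8' },
  })
}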
Automated Build Hooks
Implemented several automation enhancements that run during the build process:
Wayback Machine Snapshots:
// scripts/snapshot-wayback.ts
const targetURL = 'https://carmelosantana.com';
async function triggerSnapshot() {
if (process.env.VERCEL_ENV !== 'production') {
console.log('[Wayback] Skipped snapshot: not in production deployment');
return;
}
const archiveURL = `https://web.archive.org/save/${targetURL}`;
try {
console.log(`[Wayback] Triggering snapshot for ${targetURL}...`);
const res = await fetch(archiveURL);
if (res.ok) {
console.log(`[Wayback] Snapshot triggered successfully`);
} else {
console.error(`[Wayback] Failed with status ${res.status}`);
}
} catch (err: any) {
console.error('[Wayback] Error triggering snapshot:', err.message);
}
}
triggerSnapshot();
llms.txt Auto-Generation:
// scripts/generate-llms-txt.js
const fs = require('fs').promises;
const path = require('path');
const baseUrl = 'https://carmelosantana.com';
const llmsTxtContent = `# Carmelo Santana - Technical Lead & WordPress Architect
> Technical lead empowering creators through mindful technology. WordPress architect, automation expert, and advocate for purposeful digital craftsmanship with over 15 years of engineering experience.
## About
- [Professional Experience](${baseUrl}/about.md)
- [Services Overview](${baseUrl}/services.md)
## Digital Experiences
- [Featured Projects](${baseUrl}/projects.md)
- [Project Portfolio](${baseUrl}/portfolio.md)
## Blog & Technical Writing
- [Blog Posts](${baseUrl}/blog.md)
## Journal & Reflections
- [Journal Entries](${baseUrl}/journal.md)
## Breathwork Practice
- [Breathwork Dashboard](${baseUrl}/breathwork)
- [Breathwork Data](${baseUrl}/api/breathwork/sessions)`;
async function generateLlmsTxt() {
try {
const llmsTxtPath = path.join(__dirname, '..', 'public', 'llms.txt');
await fs.writeFile(llmsTxtPath, llmsTxtContent, 'utf-8');
console.log('✅ Successfully generated llms.txt');
} catch (error) {
console.error('❌ Error generating llms.txt:', error);
process.exit(1);
}
}
generateLlmsTxt();
Breathwork Data Processing:
// scripts/process-breathwork-data.js
const fs = require('fs');
const path = require('path');
// parseCSVLine, parseDate, and parseRetentionTime are small parsing helpers
// defined elsewhere in this script (omitted here for brevity).
function processBreathworkData() {
const csvPath = path.join(__dirname, '../content/wim-hoff-export_results.csv');
const outputPath = path.join(__dirname, '../public/data/breathwork-sessions.json');
const csvContent = fs.readFileSync(csvPath, 'utf-8');
const lines = csvContent.trim().split('\n').slice(1); // Skip header
const sessions = [];
const sessionsByDate = new Map();
lines.forEach((line, index) => {
const [dateStr, retentionTime, round] = parseCSVLine(line);
try {
const date = parseDate(dateStr);
const retentionSeconds = parseRetentionTime(retentionTime);
if (!sessionsByDate.has(dateStr)) {
sessionsByDate.set(dateStr, []);
}
sessionsByDate.get(dateStr).push({
round: parseInt(round, 10),
retentionSeconds
});
} catch (error) {
console.warn(`Skipping invalid line ${index + 2}: ${dateStr} - ${error.message}`);
}
});
// Generate summary statistics
const summary = {
totalSessions: sessionsByDate.size,
totalRounds: Array.from(sessionsByDate.values()).reduce((sum, s) => sum + s.length, 0),
processedAt: new Date().toISOString()
};
const output = { summary, sessions: Array.from(sessionsByDate.entries()) };
fs.writeFileSync(outputPath, JSON.stringify(output, null, 2));
}

processBreathworkData();
GitHub Repository Updates:
// scripts/update-github-repos.ts
import { promises as fs } from 'fs';
import path from 'path';

async function updateGitHubRepos() {
const token = process.env.GITHUB_TOKEN;
if (!token) {
throw new Error('GITHUB_TOKEN environment variable is required');
}
console.log('🔍 Fetching repositories from GitHub API...');
const response = await fetch('https://api.github.com/user/repos?per_page=100', {
headers: {
'Authorization': `token ${token}`,
'Accept': 'application/vnd.github.v3+json',
},
});
const repos = await response.json();
// Filter and transform repos
const openSourceProjects = repos
.filter(repo => !repo.private && !repo.fork)
.filter(repo => (repo.stargazers_count || 0) >= 1)
.map(repo => ({
title: repo.name,
description: repo.description || 'No description available',
href: repo.html_url,
technologies: repo.language ? [repo.language] : [],
stars: repo.stargazers_count,
forks: repo.forks_count,
lastUpdated: repo.updated_at
}))
.slice(0, 12);
// Update the data file
const dataFilePath = path.join(process.cwd(), 'lib/data.ts');
const dataContent = await fs.readFile(dataFilePath, 'utf-8');
const newOpenSourceArray = `export const openSource = ${JSON.stringify(openSourceProjects, null, 2)}`;
const updatedContent = dataContent.replace(
/export const openSource = \[[\s\S]*?\];?/,
newOpenSourceArray
);
await fs.writeFile(dataFilePath, updatedContent);
console.log('📝 Updated lib/data.ts with latest repositories');
}

updateGitHubRepos();
Build Integration:
// package.json scripts
{
"scripts": {
"build": "npm run generate-content && next build && npm run post-build",
"generate-content": "npm run generate-llms-txt && npm run process-breathwork",
"generate-llms-txt": "node scripts/generate-llms-txt.js",
"process-breathwork": "node scripts/process-breathwork-data.js",
"post-build": "npm run snapshot-wayback",
"snapshot-wayback": "tsx scripts/snapshot-wayback.ts"
}
}
Deployment and Automation
Git-Based Workflow
The entire site now follows a modern development workflow:
# Local development
git checkout -b new-post
echo "---\ntitle: New Post\n---\nContent" > content/blog/new-post.md
git add . && git commit -m "Add new post"
git push origin new-post
# GitHub PR review and merge
# Automatic Vercel deployment on merge to main
Advantages over WordPress:
- Every change is tracked and reviewable
- Rollbacks are instant git operations
- No plugin conflicts or database issues
- Collaboration through standard development tools
Vercel Integration
Vercel deployment is automatic and optimized:
// vercel.json
{
"buildCommand": "npm run build",
"framework": "nextjs",
"functions": {
"app/api/**": {
"maxDuration": 30
}
}
}
Build process:
- Generate llms.txt from content
- Process markdown to HTML
- Optimize images and assets
- Deploy to global CDN
- Trigger Wayback Machine snapshot
SEO and Search Engine Considerations
Google Search Console Migration
The migration required careful attention to search engine implications. I documented the specific issues encountered and solutions implemented:
Initial GSC Problems:
- 36 pages returning 404 errors
- 25 pages with redirect issues
- Sitemap problems: Old WordPress sitemaps couldn't be fetched
- Spam URLs: Malicious search queries being indexed
Sitemap Updates:
<!-- Generated automatically from content -->
<sitemap>
<loc>https://carmelosantana.com/sitemap.xml</loc>
<lastmod>2025-09-05</lastmod>
</sitemap>
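The sitemap itself is generated from the markdown content at build time. A minimal sketch using the app/sitemap.ts convention (the static route list here is illustrative):
// app/sitemap.ts - sketch: build the sitemap from markdown content
import type { MetadataRoute } from 'next'
import { getAllPosts } from '@/lib/content'

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const baseUrl = 'https://carmelosantana.com'

  const posts = await getAllPosts('blog')
  const postEntries = posts.map((post) => ({
    url: `${baseUrl}/blog/${post.slug}`,
    lastModified: new Date(post.date),
  }))

  const staticEntries = ['', '/blog', '/projects', '/about'].map((route) => ({
    url: `${baseUrl}${route}`,
    lastModified: new Date(),
  }))

  return [...staticEntries, ...postEntries]
}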
Comprehensive Redirect Strategy:
// next.config.mjs - Redirect mapping
const redirects = [
// Blog post migrations from root to /blog/ subdirectory
{
source: '/2014/02/05/motivation',
destination: '/blog/motivation',
permanent: true,
},
{
source: '/2022/09/25/wordpress-sqlite',
destination: '/blog/wordpress-sqlite',
permanent: true,
},
// Wildcard redirects for blog posts with path suffixes
{
source: '/alpaca-bot/:path*',
destination: '/blog/alpaca-bot',
permanent: true,
},
// Legacy WordPress sitemap redirects
{
source: '/category-sitemap1.xml',
destination: '/sitemap.xml',
permanent: true,
},
{
source: '/post-sitemap1.xml',
destination: '/sitemap.xml',
permanent: true,
},
{
source: '/page-sitemap1.xml',
destination: '/sitemap.xml',
permanent: true,
},
// RSS feed redirects
{
source: '/feed',
destination: '/blog/rss.xml',
permanent: true,
},
{
source: '/rss.xml',
destination: '/blog/rss.xml',
permanent: true,
},
// Legacy category and tag pages to main blog
{
source: '/category/:path*',
destination: '/blog',
permanent: true,
},
{
source: '/tag/:path*',
destination: '/blog',
permanent: true,
}
];
Not having set up redirects in Next.js before, it was lovely not having to use yet another plugin for such a simple feature. The built-in redirect support in next.config.mjs made it incredibly easy to handle all legacy URLs without any additional dependencies or complexity.
Middleware for Spam Protection:
// middleware.ts - Block malicious URLs
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';
export function middleware(request: NextRequest) {
const url = request.nextUrl.clone();
// Block hack-related URLs
const suspiciousPatterns = [
'instahack',
'hack+insta',
'password+cracker',
'hack+followers',
'kung.cc',
'hack+site',
'human+verification'
];
const urlString = url.pathname + url.search;
for (const pattern of suspiciousPatterns) {
if (urlString.toLowerCase().includes(pattern.toLowerCase())) {
console.log(`Blocked suspicious URL: ${urlString}`);
return NextResponse.rewrite(new URL('/404', request.url));
}
}
// Block search URLs with query parameters we don't support
if (url.pathname.startsWith('/search/') && url.search.includes('s=')) {
return NextResponse.rewrite(new URL('/404', request.url));
}
return NextResponse.next();
}
export const config = {
matcher: [
'/((?!api|_next/static|_next/image|favicon.ico).*)',
],
};
Enhanced Robots.txt:
User-agent: *
Allow: /
# Block problematic URLs to save crawl budget
Disallow: /search/*
Disallow: /*?*
Disallow: /out/*
Disallow: /entity/*
# Block hack-related patterns
Disallow: /*instahack*
Disallow: /*hack+*
Disallow: /*password+cracker*
Sitemap: https://carmelosantana.com/sitemap.xml
404 Page Enhancement:
// app/not-found.tsx
import Link from 'next/link'

export default function NotFound() {
return (
<div className="min-h-screen flex items-center justify-center">
<div className="text-center">
<h1 className="text-4xl font-bold mb-4">Page Not Found</h1>
<p className="text-gray-600 mb-8">
The page you're looking for doesn't exist or has been moved.
</p>
<div className="space-y-4">
<Link href="/" className="block text-blue-600 hover:underline">
Return to Homepage
</Link>
<Link href="/blog" className="block text-blue-600 hover:underline">
Browse Blog Posts
</Link>
<Link href="/projects" className="block text-blue-600 hover:underline">
View Projects
</Link>
</div>
</div>
</div>
);
}
Results of GSC Fixes:
- Reduced 404 errors from 36 to near zero
- Eliminated sitemap fetch errors
- Blocked 100+ spam/malicious URLs from being indexed
- Preserved SEO value through 301 redirects
- Improved crawl budget efficiency
Performance Improvements
Core Web Vitals improvements:
- LCP: 2.1s → 0.8s (static generation)
- FID: 12ms → 3ms (minimal JavaScript)
- CLS: 0.15 → 0.02 (no layout shifts)
Results and Performance
Quantitative Improvements
Build Performance:
- WordPress: 3-5 second page loads
- Next.js: 0.8s average load time
- 100/100 PageSpeed Insights scores
Developer Experience:
- Content creation: Database → File-based
- Deployment: FTP uploads → Git push
- Backups: Database exports → Version control
Infrastructure Costs:
- WordPress hosting: $20/month
- Vercel hosting: $0/month (hobby tier)
Qualitative Benefits
Content Control: Every piece of content is version-controlled and portable
Maintenance: No plugin updates, security patches, or database management
Flexibility: Custom components and layouts without PHP constraints
Modern Tooling: TypeScript, ESLint, modern development workflows
Lessons Learned
What Worked Well
- Gradual Migration: Migrating content in phases reduced complexity
- URL Preservation: Maintaining permalinks preserved SEO value
- Content-First Approach: Focusing on content structure before design
- Automation: Build-time processing reduced manual maintenance
Challenges and Solutions
Missing Dynamic Features:
- Problem: No native commenting or user interaction
- Solution: Static comments + external services for new interactions
Content Discovery:
- Problem: No admin interface for content management
- Solution: VSCode + Git workflow proved more efficient
SEO Transition:
- Problem: Temporary traffic dips during migration
- Solution: Comprehensive redirects and gradual rollout
Would I Recommend It?
A resounding yes, for both personal projects and clients. Basic web hosting is free at Vercel and incredibly easy with Dokploy. I love the static generation and writing posts right in Visual Studio Code.
Next Steps
The migration opened up new possibilities:
- Enhanced AI Integration: Expanding llms.txt with structured data
- Content Automation: Automated link checking and content optimization
- Performance Monitoring: Real-time Core Web Vitals tracking
- Advanced Analytics: Custom event tracking without plugin overhead
This migration wasn't just about changing platforms—it was about aligning my technical infrastructure with my content strategy and development workflow. The result is a site that's faster, more maintainable, and better positioned for future enhancements.
If you're considering a similar migration, the key is understanding your specific needs and being realistic about trade-offs. WordPress isn't going anywhere, but for certain use cases, a modern JAMstack approach offers compelling advantages.
Need help with your WordPress migration?
I specialize in WordPress migrations and Next.js development. Let's discuss how to modernize your website while preserving everything that matters.