
The New Rules of Technical Debt: How AI Code Generation Changes Everything About Quality, Testing, and Speed

Leonardo Steffen

You've probably spent hours tweaking meta descriptions and stuffing keywords. Your site's still invisible to Google.
Here's what's actually happening: search engines can't find your content because of how you're rendering it. The real SEO work isn't in the copy: it's in your code.
This guide shows you the implementation patterns, testing tools, and code-level solutions that determine whether your application gets discovered by search engines.
Your rendering strategy is the one choice that determines your entire SEO success.
Choose wrong, and you're fighting uphill. Choose right, and search engines roll out the red carpet.
SSR does the heavy lifting upfront: your server builds the entire page before sending it out. When Googlebot hits your URL, it gets fully-rendered content immediately. No JavaScript execution required.
// Next.js SSR implementation
export async function getServerSideProps(context) {
  const res = await fetch('https://api.example.com/data');
  const data = await res.json();
  return {
    props: { data }, // Passed to the page component
  };
}

export default function Page({ data }) {
  return <div>{data.content}</div>;
}

Performance characteristics: every request is rendered on demand, so time to first byte is higher than static delivery, but crawlers always receive complete HTML without executing JavaScript.
Use SSR when you need SEO-critical pages with frequently updated or personalized content. The trade-off is increased server load and higher response times.
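One common way to soften that trade-off is to cache the rendered response at the CDN edge. A minimal sketch, assuming your CDN honors s-maxage and stale-while-revalidate (the API URL and cache durations are placeholders):

// Cache SSR output at the CDN so most requests skip server rendering
export async function getServerSideProps({ res }) {
  // Serve cached HTML for 60s, then revalidate in the background for up to 5 minutes
  res.setHeader(
    'Cache-Control',
    'public, s-maxage=60, stale-while-revalidate=300'
  );
  const apiRes = await fetch('https://api.example.com/data');
  const data = await apiRes.json();
  return { props: { data } };
}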
SSG pre-renders HTML at build time. Your CDN serves static files, delivering the fastest possible load times and perfect SEO indexability.
export async function getStaticProps() {
  const res = await fetch('https://api.example.com/data');
  const data = await res.json();
  return {
    props: { data },
    revalidate: 3600, // ISR: regenerate at most once per hour
  };
}

When to choose SSG: content that changes infrequently and can tolerate build-time or scheduled (ISR) updates, such as marketing pages, blogs, documentation, and product catalogs.
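SSG also covers dynamic routes. In the Next.js Pages Router, getStaticPaths enumerates the URLs to pre-render at build time; a sketch, assuming a hypothetical posts API behind a [slug] route:

// pages/blog/[slug].js - pre-render each post at build time
export async function getStaticPaths() {
  const res = await fetch('https://api.example.com/posts');
  const posts = await res.json();
  return {
    paths: posts.map(post => ({ params: { slug: post.slug } })),
    fallback: 'blocking', // unknown slugs render on first request, then get cached
  };
}

export async function getStaticProps({ params }) {
  const res = await fetch(`https://api.example.com/posts/${params.slug}`);
  const post = await res.json();
  return { props: { post }, revalidate: 3600 };
}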
CSR throws everything at the browser and says "figure it out yourself." Google's documentation spells this out clearly: this breaks the basic contract between your site and search engines.
Why CSR breaks SEO: the initial HTML arrives essentially empty, so crawlers see meaningful content only after JavaScript downloads, executes, and the API responds.
// CSR pattern - problematic for SEO
import { useState, useEffect } from 'react';

function Page() {
  const [data, setData] = useState(null);
  useEffect(() => {
    fetch('/api/data')
      .then(res => res.json())
      .then(setData);
  }, []);
  return <div>{data ? data.content : 'Loading...'}</div>;
}

The hard truth: Google processes only the first 15MB of your initial HTML response.
Everything beyond that gets dropped. Gone. Invisible. Might as well not exist.
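If you're not sure where your pages stand, a quick check of the initial HTML payload is easy to script; a minimal sketch (the URL is a placeholder):

// Measure how much initial HTML a crawler actually receives
async function measureInitialHtml(url) {
  const res = await fetch(url);
  const html = await res.text();
  const megabytes = new TextEncoder().encode(html).length / (1024 * 1024);
  console.log(`${url}: ${megabytes.toFixed(2)} MB of initial HTML`);
}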
Dynamic rendering sounds perfect. Serve pre-rendered HTML to bots, JavaScript to humans. Everyone wins, right?
Wrong. Google calls this a temporary workaround for a reason: you're building technical debt that will bite you later.
Google recommends SSR or SSG instead of maintaining dual codepaths with dynamic rendering.
Every page needs these fundamental elements:
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Page Title - 50-60 characters optimal</title>
<meta name="description" content="150-160 characters">
<link rel="canonical" href="https://example.com/canonical-url">Here's how this looks in practice:
Next.js App Router (Recommended):
export const metadata = {
  title: 'Product Name',
  description: 'Product description',
  robots: {
    index: true,
    follow: true,
  },
  openGraph: {
    title: 'Product Name',
    description: 'Product description',
    images: [{
      url: 'https://example.com/og-image.jpg',
      width: 1200,
      height: 630,
    }],
  },
};

React with Helmet:
import { Helmet } from 'react-helmet-async';

function ProductPage({ product }) {
  return (
    <>
      <Helmet>
        <title>{product.name} | Your Store</title>
        <meta name="description" content={product.description} />
        <link rel="canonical" href={`https://example.com/products/${product.slug}`} />
      </Helmet>
    </>
  );
}

Dynamic routing requires canonical tags that use absolute URLs, never relative paths or hash fragments.
Next.js Dynamic Routes:
// pages/blog/[slug].js
import Head from 'next/head'

function BlogPost({ post }) {
  return (
    <>
      <Head>
        <link
          rel="canonical"
          href={`https://example.com/blog/${post.slug}`}
          key="canonical"
        />
      </Head>
    </>
  )
}

Generate sitemaps programmatically from your data sources:
// Next.js API route for dynamic sitemap
export default async function handler(req, res) {
  const posts = await getAllBlogPosts()
  const products = await getAllProducts()
  const sitemap = `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com</loc>
    <lastmod>${new Date().toISOString()}</lastmod>
    <priority>1.0</priority>
  </url>
  ${posts.map(post => `
  <url>
    <loc>https://example.com/blog/${post.slug}</loc>
    <lastmod>${post.updatedAt}</lastmod>
  </url>
  `).join('')}
  ${products.map(product => `
  <url>
    <loc>https://example.com/products/${product.slug}</loc>
    <lastmod>${product.updatedAt}</lastmod>
  </url>
  `).join('')}
</urlset>`
  res.setHeader('Content-Type', 'text/xml')
  res.write(sitemap)
  res.end()
}

Never block CSS or JavaScript files that are required for rendering:
# GOOD - Allow rendering resources
User-agent: *
Disallow: /admin/
Allow: /css/
Allow: /js/
Allow: /images/

The URL Inspection Tool is your most important testing resource. It shows you exactly how Google crawls, renders, and indexes your pages.
Testing workflow:
1. Enter the page URL in Search Console and run a live test.
2. Review the rendered HTML and screenshot to confirm your content and meta tags appear after JavaScript execution.
3. Check the coverage and indexing status for crawl or rendering errors.
4. Request indexing once fixes are deployed.
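You can also run the same inspection programmatically. The Search Console API exposes a URL Inspection endpoint; here's a sketch using plain fetch, assuming you already have an OAuth access token for a verified property (URLs are placeholders):

// Inspect how Google indexes a URL via the Search Console API
async function inspectUrl(accessToken, pageUrl, property) {
  const res = await fetch(
    'https://searchconsole.googleapis.com/v1/urlInspection/index:inspect',
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ inspectionUrl: pageUrl, siteUrl: property }),
    }
  );
  const { inspectionResult } = await res.json();
  // Verdict and coverage state for the inspected URL
  console.log(inspectionResult.indexStatusResult.verdict);
  console.log(inspectionResult.indexStatusResult.coverageState);
}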
The Rich Results Test validates your structured data implementation. Test both live URLs and code snippets during development.
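As a concrete example of what you'd be validating, here's a sketch of Product JSON-LD rendered from a React component (the product fields and currency are placeholders):

// Embed Product structured data as JSON-LD
function ProductJsonLd({ product }) {
  const jsonLd = {
    '@context': 'https://schema.org',
    '@type': 'Product',
    name: product.name,
    description: product.description,
    image: product.imageUrl,
    offers: {
      '@type': 'Offer',
      price: product.price,
      priceCurrency: 'USD',
      availability: 'https://schema.org/InStock',
    },
  };
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
    />
  );
}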
Here's the reality check: disable JavaScript in your browser. If critical content disappears, search engines may not index it properly.
For SPAs, verify that:
- Critical content is present in the server-delivered HTML, not injected only after hydration
- Navigation uses real <a href> links that crawlers can follow
- Title, meta description, and canonical tags don't depend on client-side JavaScript
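A minimal automated version of the same check, assuming you know a text snippet that should always be present in the server-delivered HTML:

// Fetch the raw HTML (no JavaScript execution) and look for critical content
async function checkPrerenderedContent(url, markerText) {
  const res = await fetch(url);
  const html = await res.text();
  if (!html.includes(markerText)) {
    console.warn(`"${markerText}" is missing from the initial HTML of ${url}`);
  }
}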
Core Web Vitals directly impact your SEO rankings. Since March 2024, the metrics are Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS); INP replaced First Input Delay (FID).
Target thresholds (measured at the 75th percentile of page loads): LCP under 2.5 seconds, INP under 200 milliseconds, and CLS under 0.1.
import {onCLS, onINP, onLCP} from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify(metric);
  navigator.sendBeacon('/analytics', body);
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);

Critical distinction: INP can only be measured in the field with real user interactions through Real User Monitoring (RUM). Lab tools like Lighthouse cannot measure it because it requires actual user input, as documented in web.dev's measurement guide.
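If you don't have your own RUM pipeline yet, the Chrome UX Report (CrUX) API exposes field data Google has already collected; a sketch, assuming an API key and a page with enough traffic to appear in CrUX:

// Query 75th-percentile field metrics for a page from the CrUX API
async function getFieldVitals(apiKey, pageUrl) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        url: pageUrl,
        formFactor: 'PHONE',
        metrics: [
          'interaction_to_next_paint',
          'largest_contentful_paint',
          'cumulative_layout_shift',
        ],
      }),
    }
  );
  const { record } = await res.json();
  console.log(record.metrics.interaction_to_next_paint.percentiles.p75);
}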
Implement dynamic imports with proper chunk naming using the magic comment syntax:
// Dynamic import with webpack magic comments
import(/* webpackChunkName: "my-chunk-name" */ './module')
  .then(module => {
    // Use module
  })
  .catch(error => {
    // Handle loading error
  });

React lazy loading:
import React, { lazy, Suspense } from 'react';

const HeavyComponent = lazy(() => import('./HeavyComponent'));

function App() {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      <HeavyComponent />
    </Suspense>
  );
}

Prevent layout shifts by specifying image dimensions and using the aspect-ratio CSS property:
<img src="image.jpg" width="800" height="600" alt="Descriptive text">img {
width: 100%;
height: auto;
aspect-ratio: 16 / 9;
}Split JavaScript processing to avoid blocking the main thread:
async function processItems(items) {
  for (let i = 0; i < items.length; i++) {
    processItem(items[i]);
    // Yield to the browser every 100 iterations
    if (i % 100 === 0) {
      await new Promise(resolve => setTimeout(resolve, 0));
    }
  }
}

The problem: Content only accessible through infinite scroll becomes invisible to search crawlers.
Solution: Implement component pages with proper URLs:
// When loading new content via infinite scroll
function loadMoreContent(pageNum) {
  fetch(`/api/content?page=${pageNum}`)
    .then(response => response.json())
    .then(data => {
      appendContent(data);
      // Update URL without page reload
      history.pushState({page: pageNum}, '', `/page/${pageNum}`);
    });
}

Ensure your server returns 404 status codes for out-of-bounds page numbers, as documented in Google's infinite scroll guidance.
The problem: Server returns 200 status for pages that should return 404, confusing search engines.
Node.js/Express solution:
app.get('/products/:id', async (req, res) => {
  const product = await db.findProduct(req.params.id);
  if (!product) {
    return res.status(404).render('404');
  }
  res.render('product', { product });
});

Client-side alternative:
// For client-side 404 detection
if (!contentExists) {
  const meta = document.createElement('meta');
  meta.name = 'robots';
  meta.content = 'noindex';
  document.head.appendChild(meta);
}

Dynamic injection pattern:
function updateMetaTags(data) {
  document.title = data.title;
  const description = document.querySelector('meta[name="description"]');
  if (description) {
    description.setAttribute('content', data.description);
  }
  const canonical = document.querySelector('link[rel="canonical"]');
  if (canonical) {
    canonical.setAttribute('href', data.canonicalUrl);
  }
}

Fix your robots.txt:
# Allow specific JavaScript files
User-agent: Googlebot
Allow: /js/main.js
Allow: /js/vendor.js
Disallow: /js/admin/

Test resource accessibility:
curl -I -A "Googlebot" https://example.com/css/main.css

The PageSpeed Insights API provides programmatic access to Lighthouse audits:
const {google} = require('googleapis');
const psi = google.pagespeedonline('v5');

async function auditPage(url) {
  const response = await psi.pagespeedapi.runPagespeed({
    url: url,
    key: 'YOUR_API_KEY',
    strategy: 'mobile'
  });
  const score = response.data.lighthouseResult.categories.performance.score;
  const lcp = response.data.lighthouseResult.audits['largest-contentful-paint'].numericValue;
  return { score, lcp };
}

Access search analytics programmatically through the Search Console API:
from googleapiclient.discovery import build

service = build('searchconsole', 'v1', credentials=creds)

response = service.searchanalytics().query(
    siteUrl='https://example.com',
    body={
        'startDate': '2024-10-01',
        'endDate': '2024-10-30',
        'dimensions': ['query'],
        'rowLimit': 1000
    }
).execute()

for row in response.get('rows', []):
    print(f"Query: {row['keys'][0]}, Clicks: {row['clicks']}")

Automate performance testing in your CI/CD pipeline:
# GitHub Actions workflow
name: Lighthouse CI
on: [push]
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
      - run: npm install
      - run: npm install -g @lhci/cli
      - run: lhci autorun

Configuration (.lighthouserc.js):
module.exports = {
  ci: {
    collect: {
      url: ['https://example.com', 'https://example.com/about'],
      numberOfRuns: 3
    },
    assert: {
      assertions: {
        'categories:performance': ['error', {minScore: 0.9}],
        'categories:accessibility': ['warn', {minScore: 0.9}]
      }
    },
    upload: {
      target: 'temporary-public-storage'
    }
  }
};

Automate comprehensive site crawls:
screamingfrogseospider --crawl https://example.com \
--headless \
--save-crawl \
--output-folder /tmp \
--export-tabs "Internal:All"

The tool integrates with multiple APIs including PageSpeed Insights, Google Analytics, and Google Search Console for batch analysis.
Should I use SSR or dynamic rendering?
Choose SSR or SSG. Google recommends SSR or SSG over dynamic rendering, which is meant as a temporary workaround. SSR provides better performance for frequently updated content and guaranteed content delivery to crawlers, while SSG provides optimal performance for static or infrequently changing content.
How do I test if Googlebot can render my JavaScript-heavy site?
Use Google Search Console's URL Inspection Tool. Test live URLs and examine the rendered screenshot to see exactly what Googlebot sees after JavaScript execution. Compare this against your expected output to identify rendering failures.
Can I do SEO without a modern framework?
Absolutely. Traditional server-rendered applications often have better SEO out of the box. However, modern frameworks provide helpful abstractions for meta tag management, sitemap generation, and performance optimization that can accelerate development.
How do I handle authenticated pages?
Use the noindex robots meta tag for pages behind authentication:
<meta name="robots" content="noindex, nofollow">Ensure your robots.txt doesn't block the login page itself, as this can prevent Google from understanding your site structure.
What's the minimum viable SEO implementation?
Server-rendered or pre-rendered HTML for indexable pages, a unique title and meta description per page, absolute canonical URLs, an XML sitemap, and a robots.txt that doesn't block rendering resources. Everything else builds on top of that.
How often should I run automated audits?
Run Lighthouse audits on every deployment. Monitor Core Web Vitals continuously in production. Schedule comprehensive crawls weekly for large sites, monthly for smaller ones. Set up alerts for sudden drops in indexable pages or performance metrics.
Do I need to worry about my tech stack's SEO impact?
Your rendering strategy matters more than your specific framework. SSG provides the best SEO results, SSR is excellent for dynamic content, and CSR requires additional effort and faces significant crawlability challenges. Choose based on your content requirements and SEO needs, not on framework preference alone.
The foundation of technical SEO is making your content accessible to search engines. Everything else—keywords, backlinks, content strategy—builds on that foundation.
Get the technical implementation right first, then optimize from there.

Sergey Kaplich