2024-08-24 by Roel Kristelijn
When your production application suddenly starts throwing Error 1102: Worker exceeded resource limits, it's time for some serious performance detective work. This is the story of how we diagnosed, analyzed, and solved a critical resource limit issue in our Next.js blog deployed on Cloudflare Workers.
Error 1102 Ray ID: 97444d3e0ece590c • 2025-08-24 16:42:14 UTC
Worker exceeded resource limits
Our blog suddenly became inaccessible, throwing this cryptic error. Users couldn't access any pages, and the Worker was consistently hitting resource limits.
Cloudflare Workers have strict resource constraints:
| Resource | Free Plan | Paid Plan |
|---|---|---|
| CPU Time | 10ms | 50ms |
| Memory | 128MB | 128MB |
| Bundle Size | 1MB | 10MB |
| Subrequests | 50 | 1000 |
When any of these limits are exceeded, you get Error 1102.
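To get a feel for how tight the 10ms CPU budget is, here's a rough Node-side experiment (not a Worker benchmark, and the sizes are invented to roughly match our 19-post file):

```javascript
// Build a JSON blob of roughly the same shape and size as our posts.json
// (19 posts, ~12KB of content each - illustrative numbers, not real data)
const bigBlob = JSON.stringify({
  posts: Array.from({ length: 19 }, (_, i) => ({
    id: i,
    content: 'x'.repeat(12000),
  })),
});

// Time a single JSON.parse - this cost is paid on every request
const start = process.hrtime.bigint();
const parsed = JSON.parse(bigBlob);
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;

console.log(`Parsed ${(bigBlob.length / 1024).toFixed(0)}KB in ${elapsedMs.toFixed(2)}ms`);
```

Parsing alone eats into the budget before any rendering work starts, and parse time on a Worker's constrained CPU is typically worse than on a development machine.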
Let's examine what was happening in our application:
```bash
# Check bundle size
ls -lh src/data/posts.json
# Output: -rw-r--r-- 1 staff 226K Aug 24 13:33 posts.json
```
226KB for a single JSON file! This was our first red flag.
Here's what was happening on every request: the Worker imported and parsed the entire posts.json, which contained:
```json
{
  "posts": [
    {
      "id": "01-creating-nextjs-project",
      "slug": "creating-nextjs-project...",
      "title": "Creating a Next.js Project",
      "excerpt": "Brief description...",
      "content": "# Very long content with Mermaid diagrams, code blocks, etc..."
    }
    // ... 18 more posts with full content
  ]
}
```
Problem: Every page load imported the entire 226KB file, even when only metadata was needed.
| Metric | Impact | Consequence |
|---|---|---|
| Bundle Size | 226KB loaded on import | Slow Worker startup |
| Memory Usage | All posts in memory | High memory pressure |
| CPU Time | JSON parsing + processing | Exceeded 10ms limit |
| Network | Large bundle transfer | Increased latency |
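The size impact is easy to reproduce with a toy Node example (the data below is made up, but the split is exactly what we do at scale):

```javascript
// Two fake posts with heavy content fields
const posts = [
  { id: '1', slug: 'a', title: 'Post A', excerpt: 'short', content: 'x'.repeat(10000) },
  { id: '2', slug: 'b', title: 'Post B', excerpt: 'short', content: 'y'.repeat(10000) },
];

const fullSize = JSON.stringify({ posts }).length;

// Keep everything except the content field
const metadata = posts.map(({ content, ...meta }) => meta);
const metaSize = JSON.stringify({ posts: metadata }).length;

console.log(`full: ${fullSize}B, metadata: ${metaSize}B`);
console.log(`saved: ${((1 - metaSize / fullSize) * 100).toFixed(1)}%`);
```

A listing page only ever needs the metadata half, so everything else is wasted bundle, memory, and parse time.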
We implemented a two-tier data architecture:
```javascript
// scripts/optimize-for-workers.js
const fs = require('fs');
const path = require('path');

async function optimizeForWorkers() {
  const postsData = JSON.parse(fs.readFileSync(POSTS_FILE, 'utf8'));

  // Create lightweight metadata
  const metadata = {
    posts: postsData.posts.map(post => ({
      id: post.id,
      slug: post.slug,
      title: post.title,
      excerpt: post.excerpt,
      date: post.date,
      author: post.author,
      // Remove content field
    })),
    slugs: postsData.slugs,
    generatedAt: postsData.generatedAt
  };

  // Save individual content files
  for (const post of postsData.posts) {
    const contentFile = path.join(CONTENT_DIR, `${post.slug}.json`);
    fs.writeFileSync(contentFile, JSON.stringify({ content: post.content }));
  }

  fs.writeFileSync(METADATA_FILE, JSON.stringify(metadata, null, 2));
}
```
```typescript
// src/lib/posts-static.ts
import postsMetadata from '@/data/posts-metadata.json'; // Only 10.9KB!

const contentCache = new Map<string, string>();

async function loadPostContent(slug: string): Promise<string> {
  if (contentCache.has(slug)) {
    return contentCache.get(slug)!;
  }

  try {
    // Dynamic import - only loads when needed
    const contentModule = await import(`@/data/content/${slug}.json`);
    const content = contentModule.content || '';

    contentCache.set(slug, content);
    return content;
  } catch (error) {
    console.error(`Error loading content for ${slug}:`, error);
    return '';
  }
}

// Metadata only - fast and lightweight
export function getAllPosts(): Omit<Post, 'content'>[] {
  return postsMetadata.posts.map(post => ({
    id: post.id,
    slug: post.slug,
    title: post.title,
    date: post.date,
    author: post.author,
    excerpt: post.excerpt,
  }));
}

// Full post with content - loaded on demand
export async function getPostBySlug(slug: string): Promise<Post | undefined> {
  const postMeta = postsMetadata.posts.find(post => post.slug === slug);

  if (!postMeta) return undefined;

  const content = await loadPostContent(slug);
  return { ...postMeta, content };
}
```
```bash
# Before optimization
Original size: 226.3KB

# After optimization
Metadata size: 10.9KB
Space saved: 95.2% (215.4KB)
```
| Scenario | Before | After | Improvement |
|---|---|---|---|
| Posts listing | 226KB | 10.9KB | 95.2% reduction |
| Single post | 226KB | 10.9KB + ~12KB | ~90% reduction |
| Multiple posts | 226KB | 10.9KB + (n × ~12KB) | Scales linearly |
| Component | Before | After | Savings |
|---|---|---|---|
| Metadata | 226KB | 10.9KB | 215.1KB |
| Content | Included | On-demand | Variable |
| Total Initial | 226KB | 10.9KB | 95.2% |
```typescript
// Posts listing: Only metadata needed
const posts = getAllPosts(); // 10.9KB loaded

// Individual post: Metadata + specific content
const post = await getPostBySlug('my-post'); // 10.9KB + ~12KB
```
Problem: Cloudflare Workers bundlers can only resolve dynamic imports whose paths are statically analyzable.
Solution: Use dynamic imports with a path pattern that is known at build time, so the bundler can include every possible module:
```typescript
// This works in Workers: the path prefix and extension are known at build time
const contentModule = await import(`@/data/content/${slug}.json`);

// This doesn't work in Workers: the path is fully dynamic
const contentModule = await import(dynamicPath);
```
Problem: Posts without content need different types.
Solution: Flexible type definitions:
```typescript
interface PostCardProps {
  post: Omit<Post, 'content'> | Post; // Supports both
}
```
Problem: Need to run optimization automatically.
Solution: Integrated build pipeline:
```json
{
  "scripts": {
    "prebuild": "node scripts/generate-posts-data.js && node scripts/optimize-for-workers.js"
  }
}
```
```bash
# Successful deployment output
✨ Success! Uploaded 7 files (66 already uploaded) (1.77 sec)
Total Upload: 13450.13 KiB / gzip: 2695.83 KiB
Worker Startup Time: 24 ms
Deployed next-blog triggers (1.27 sec)
https://next-blog.rkristelijn.workers.dev
```
| Metric | Before | After | Improvement |
|---|---|---|---|
| Error Rate | 100% (Error 1102) | 0% | ✅ Resolved |
| Bundle Size | 226KB | 10.9KB | 95.2% reduction |
| Memory Usage | High | Low | ~95% reduction |
| Startup Time | Slow | Fast (24 ms) | Significantly improved |
| Scalability | Limited | High | Linear scaling |
```bash
# Add to CI/CD pipeline
npm run build | grep "First Load JS"
```
Don't load everything upfront. Design for on-demand loading:
```typescript
// Good: Load what you need
const metadata = getPostMetadata();

// Bad: Load everything
const allData = getAllDataIncludingContent();
```
```typescript
// For listings: Metadata only
interface PostSummary {
  id: string;
  title: string;
  excerpt: string;
  date: string;
}

// For details: Full content
interface PostDetail extends PostSummary {
  content: string;
}
```
```typescript
// Cache expensive operations
const contentCache = new Map<string, string>();

// But don't cache everything
// Cache only frequently accessed content
```
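One way to put "don't cache everything" into practice is to cap the cache and evict the oldest entry, keeping memory bounded under the Worker's 128MB limit. This is a sketch, not our production code; `MAX_CACHE_ENTRIES` is an assumed tuning knob:

```javascript
// Size-capped cache with simple FIFO eviction
const MAX_CACHE_ENTRIES = 3;
const boundedCache = new Map();

function cachePut(key, value) {
  if (boundedCache.size >= MAX_CACHE_ENTRIES) {
    // A Map iterates in insertion order, so the first key is the oldest
    const oldest = boundedCache.keys().next().value;
    boundedCache.delete(oldest);
  }
  boundedCache.set(key, value);
}

['a', 'b', 'c', 'd'].forEach(slug => cachePut(slug, `content-${slug}`));
console.log([...boundedCache.keys()]); // oldest entry 'a' has been evicted
```

Re-inserting on every read (moving the entry to the back of the Map) would turn this into a rough LRU cache with only a few more lines.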
```typescript
// Add performance monitoring
console.time('operation');
await expensiveOperation();
console.timeEnd('operation');

// Monitor in production
if (process.env.NODE_ENV === 'production') {
  // Log performance metrics
}
```
```bash
# Gzip content files
gzip content/*.json
# Potential 60-80% additional savings
```
```typescript
// Cache content at Cloudflare edge
const cacheKey = `post-content-${slug}`;
const cached = await caches.default.match(cacheKey);
```
```typescript
// Load content sections progressively
const sections = await loadPostSections(slug);
```
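`loadPostSections` isn't implemented yet; one hypothetical shape for it is to split stored Markdown at top-level headings so the page can render sections as they arrive:

```javascript
// Split Markdown into sections at "## " headings;
// the lookahead keeps each heading attached to its own section
function splitIntoSections(markdown) {
  return markdown.split(/\n(?=## )/);
}

const sections = splitIntoSections('Intro text\n## Setup\nbody\n## Usage\nbody');
console.log(sections.length); // 3 sections: intro, Setup, Usage
```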
```typescript
// Client-side caching for repeat visits
// (cacheFirst: a helper that checks the Cache API before hitting the network)
self.addEventListener('fetch', event => {
  if (event.request.url.includes('/content/')) {
    event.respondWith(cacheFirst(event.request));
  }
});
```
The Error 1102 "Worker exceeded resource limits" taught us valuable lessons about performance optimization in serverless environments. By implementing lazy loading and reducing our bundle size by 95.2%, we not only solved the immediate problem but also created a more scalable architecture.
The solution demonstrates that with careful analysis and strategic optimization, even complex applications can run efficiently within Cloudflare Worker constraints while maintaining excellent performance and user experience.
Our blog now handles traffic smoothly, scales efficiently, and stays well within resource limits - proving that sometimes the best optimization is simply not loading what you don't need.
Performance Stats: 95.2% bundle size reduction, 24 ms Worker startup, 0% error rate after deployment.
The complete optimization code and scripts are available in our GitHub repository.