<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Ric Smolenski</title>
        <link>https://ricsmo.com</link>
        <description>Personal blog and occasional thoughts from Ric Smolenski.</description>
        <lastBuildDate>Sat, 25 Apr 2026 00:00:00 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <image>
            <title>Ric Smolenski</title>
            <url>https://ricsmo.com/og-image.png</url>
            <link>https://ricsmo.com</link>
        </image>
        <copyright>All rights reserved 2026, Ric Smolenski</copyright>
        <item>
            <title><![CDATA[Running Two GPU Batch Jobs on One Circuit]]></title>
            <link>https://ricsmo.com/blog/circuit-breaker-gpu-batch-jobs</link>
            <guid>https://ricsmo.com/blog/circuit-breaker-gpu-batch-jobs</guid>
            <pubDate>Sat, 25 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[The story of tripping a circuit breaker with two GPU batch jobs, the power draw math behind it, and the unexpected culprit.]]></description>
            <content:encoded><![CDATA[
I had two machines running Whisper transcription at the same time. My Mac M2 Max was using Metal to process course transcripts. My PC with a GTX 1080 Ti was using CUDA to do the same thing on a different set of files. Both at full GPU load.

Both plugged into the same electrical circuit.

What happened next was predictable in retrospect. But at the time, I was confused why my PC kept shutting down mid-batch.

## The Symptoms

The first time it happened, I thought it was a bad file. The GTX 1080 Ti was chugging through a course on copywriting and just... died. Black screen. No warning.

I rebooted, restarted the batch, and it crashed again on a different file. Then a third time.

I blamed the Whisper installation. Maybe the CUDA DLL workaround was flaky. Maybe the GPU was overheating. I spent an hour debugging the wrong problem.

Then it happened on the Mac too. Not a crash, but everything went dark for a second. The monitors, the desk lamp, the VoIP phone. All of it. And then it came back.

That's not a GPU driver issue. That's a circuit breaker.

## The Power Draw

Once I figured out what was happening, I did the math on what was actually on that circuit.

- **Mac M2 Max under Metal load:** 50-70W
- **PC (i7-3970K + GTX 1080 Ti under CUDA load):** 350-400W
- **Second MacBook + monitor:** 60W
- **UniFi Dream Machine Pro:** 15-20W
- **UniFi 24-port PoE switch + 5 Raspberry Pis:** 60-80W
- **HDHomeRun, MoCA adapters, Hue hub, VoIP phone:** 20W
- **Two laser printers (idle):** 20-30W
- **Mini fridge:** 100-150W (when compressor kicks on)

That's roughly 700-850W steady state. Most home circuits in the US are 15 amps at 120 volts, which works out to 1,800W. On paper, we're fine.

But circuits don't trip at their rated capacity. They trip based on the breaker's time-current curve. A steady 800W load with a sudden spike from the mini fridge compressor cycling on at the same moment the GPU pushes harder during a complex transcription pass? That spike can push you over the threshold momentarily.
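
Here's the back-of-the-envelope version as a quick TypeScript sketch. The 80% figure is the standard derating for continuous loads on a residential breaker; the wattages are my estimates from the list above.

```ts
// Back-of-the-envelope circuit math; wattages are the estimates listed above
const circuitWatts = 15 * 120;              // 15 A breaker at 120 V = 1,800 W
const continuousLimit = circuitWatts * 0.8; // breakers carry ~80% of rating sustained

const steadyState = 830;                    // high end of my 700-850W estimate
const inrushExtra = 150 * 5 - 150;          // fridge compressor start, ~5x running power

const peak = steadyState + inrushExtra;     // 1,430W before any GPU transient
console.log(continuousLimit, peak, circuitWatts - peak); // 1440, 1430, 370W of headroom
```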

## The Real Culprit

The mini fridge.

A mini fridge compressor doesn't run continuously. It cycles on and off, typically pulling 3-5x its rated running power for the first second or two when it kicks on. That inrush current is the spike that pushed the circuit over the edge at exactly the wrong moment.

Everything else on the circuit was fine. The steady-state load was well within limits. But you can't predict when the compressor will cycle, and you can't control when a GPU will spike during inference.

The combination was random but frequent enough to make both batch jobs unreliable.

## The Fix

The solution was simple. I moved the mini fridge to a different circuit.

After that, both machines ran their batch jobs for days without a single crash. The Mac processed 457 files. The PC processed its queue. No reboots, no lost progress, no mystery shutdowns.

## The Lesson for Solopreneurs

This is the kind of problem nobody warns you about when you start running a business from home. There's no course about electrical load balancing. No LinkedIn influencer posting about circuit breaker math.

When you're one person running your own infrastructure, you become responsible for everything. Not just the code and the content, but the physical environment it runs in. Power, cooling, networking, storage. It's all on you.

One of the courses I went through talked about how solopreneurs hit walls when they try to scale alone. The usual advice is about hiring or delegating. But sometimes the wall isn't about people. It's about physics.

I was trying to do two heavy compute jobs simultaneously on consumer-grade electrical wiring in a home office. The constraint wasn't my skill or my tools. It was the building I was sitting in.

The fix cost me nothing. Unplugging a mini fridge and walking it to the kitchen. But diagnosing the problem took hours because I was looking in the wrong place. I assumed it was a software or hardware failure when it was a power infrastructure issue all along.

If you're running serious hardware at home, map your circuits. Know what's on each breaker. It's not glamorous work, but it saves hours of debugging mysterious crashes that have nothing to do with your code.

And if you're running two GPU batch jobs at the same time? Put the mini fridge somewhere else.
]]></content:encoded>
            <category>whisper</category>
            <category>gpu</category>
            <category>hardware</category>
            <category>solopreneurship</category>
            <category>lessons-learned</category>
            <category>homelab</category>
        </item>
        <item>
            <title><![CDATA[Why I Left Cloudflare Workers]]></title>
            <link>https://ricsmo.com/blog/cloudflare-workers-bunny-containers-migration</link>
            <guid>https://ricsmo.com/blog/cloudflare-workers-bunny-containers-migration</guid>
            <pubDate>Sat, 25 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Migrating Notary Atlas from Cloudflare Workers + D1 to Bunny Magic Containers — and why the edge runtime wasn't worth the tradeoffs for a Next.js app.]]></description>
            <content:encoded><![CDATA[
I built Notary Atlas on Cloudflare Workers. Seemed like the right call at the time. Workers promised zero cold starts, global edge deployment, and D1 for a managed SQLite database. Everything a directory app needs. No server to maintain.

Except that's not how it played out.

The project hasn't launched yet. I'm still building it. And after 120 commits, 22 of them debugging a single feature, I decided to rip the entire Cloudflare stack out and start over on Bunny Magic Containers. Same app. Different runtime. Node.js, not edge.

Here's what happened.

## The OG Image Saga

The whole thing started with Open Graph images. Every page on Notary Atlas needs a social sharing image. The standard approach is to generate them at build time or request time using Satori (a library that turns JSX into SVGs, which become PNGs).

Satori works fine in Node.js. Cloudflare Workers, not so much.

Workers run on V8 isolates, not full Node.js. No `node:fs`. No local file access. No native font loading. The way Satori loads fonts is fundamentally incompatible with how Workers handle files.

My first attempt used `@vercel/og`, which wraps Satori. It worked locally. It failed in production. Fonts wouldn't load. Memory constraints kicked in. The image generation would hang or produce corrupted output.

I spent 8 commits trying to fix it. Different font loading strategies. Bundling fonts as base64. Trying alternative image libraries. Nothing worked reliably on the edge runtime.

Eventually I gave up. I removed dynamic OG generation entirely and went with static placeholder images. Then I rebuilt the OG generator as a completely separate Cloudflare Worker with its own deployment pipeline, its own D1 binding, its own rate limiter. Two deployments for one app. Two database connections. Two of everything, because the main app couldn't do what any standard Node.js server does out of the box.

That one feature was the straw that broke the camel's back. Not because it was the only problem, but because it was the most absurd.

## The Other Friction Points

OG images were the loudest problem, but they weren't the only one.

**LibSQL client incompatibility.** The `@libsql/client` library couldn't bundle into the edge worker. I had to use dynamic imports and conditional logic to prevent it from being included in the worker bundle. Then I had to create a wrapper component that dynamically loaded the search feature client-side only, because the database driver didn't work with server-side rendering on Workers.

**No `getCloudflareContext()` call was clean.** Every API route needed to call `getCloudflareContext()` to access the D1 binding and other CF-specific bindings. This meant `as any` type casts scattered throughout the codebase because the context types didn't match what Drizzle ORM expected. I had 4+ places where the database was cast to `any` just to make queries work.

**ISR didn't work.** I set up Incremental Static Regeneration, the standard Next.js caching strategy. It caused staleness issues, with outdated data showing on directory pages. I had to add `revalidatePath()` calls in webhooks, then self-fetch hacks to warm the cache. Eventually I ripped it all out. Every page hit D1 on every request. No caching at all.

**The adapter layer.** The entire app ran through `@opennextjs/cloudflare`, a compatibility shim that translates Next.js server features into Cloudflare Workers. It added build complexity, a `.open-next/` directory full of generated artifacts, and its own config file. Another moving part between my code and production.

None of these were dealbreakers on their own. Together, they added up to a codebase full of workarounds and defensive coding. The app worked, but it was held together with duct tape.

## The Decision to Move

What made the decision easy was timing. The project hasn't launched. There are no users. The database has almost no real records. This is the cheapest possible time to change infrastructure.

I'd already built NotaryStyle on Bunny Magic Containers. That project uses Bunny's container hosting with a managed libSQL database. Same database protocol as Cloudflare D1. Same ORM (Drizzle). Same everything on the data layer.

The difference is the runtime. Bunny containers run standard Node.js. Not an edge runtime with restrictions. Not a V8 isolate pretending to be a server. Actual Node.js, with `node:fs`, native font loading, `node-cron`, and every npm package working exactly as documented.

## What the Migration Looked Like

The actual migration was straightforward. Phase 1 was removing everything Cloudflare-specific.

I deleted `wrangler.toml`. Removed `@opennextjs/cloudflare` and `@cloudflare/workers-types` from package dependencies. Deleted the `open-next.config.ts` adapter config. Removed all `getCloudflareContext()` calls and replaced them with `process.env`. Deleted the `DirectorySearchWrapper.tsx` component that existed only to work around the LibSQL bundling issue. Set `output: 'standalone'` in the Next.js config, the standard setting for Docker deployments.

The database connection went from a multi-line conditional block (LibSQL for local dev, D1 for production, with CF context indirection) to three lines. Same `@libsql/client`, same connection string, same code everywhere.
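
In sketch form, the simplified connection looks something like this (env var name assumed; the real file may differ slightly):

```ts
// db.ts: a minimal sketch of the post-migration connection
import { createClient } from "@libsql/client";
import { drizzle } from "drizzle-orm/libsql";

const client = createClient({ url: process.env.DATABASE_URL! }); // same URL in dev and prod
export const db = drizzle(client);
```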

Phase 2 was containerizing. Dockerfile with three stages (deps, build, runner), same pattern as NotaryStyle. A `.dockerignore`. An `entrypoint.sh` script for future cron jobs.

Then I moved the OG image generator into the main app. One route handler at `/api/og` using `@vercel/og`. It worked on the first try. The separate OG worker became unnecessary. One deployment instead of two.
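
A minimal sketch of what that route handler looks like (the layout here is illustrative, not the actual Notary Atlas design):

```tsx
// app/api/og/route.tsx
import { ImageResponse } from "@vercel/og";

export async function GET(request: Request) {
  const title = new URL(request.url).searchParams.get("title") ?? "Notary Atlas";
  return new ImageResponse(
    (
      <div style={{ width: "100%", height: "100%", display: "flex",
                    alignItems: "center", justifyContent: "center",
                    background: "#0f172a", color: "#fff", fontSize: 64 }}>
        {title}
      </div>
    ),
    { width: 1200, height: 630 }
  );
}
```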

## What Changed in Practice

| Before | After |
|--------|-------|
| `getCloudflareContext()` for DB access | `process.env` like any Node app |
| `as any` casts in 4+ places | Clean types throughout |
| Dynamic imports to prevent edge bundling | Standard imports everywhere |
| Separate OG image worker deployment | Single route in main app |
| OpenNext adapter + `.open-next/` artifacts | Standard `next build` |
| `wrangler.toml` + CF-specific env vars | `.env` files |
| No cron support | `node-cron` available |
| No caching (ISR removed) | Can re-enable native ISR |

## The Tradeoff

Bunny containers aren't perfect. Cold starts are a real thing. When a container sits idle, Bunny spins it down. The next request has to wait for it to start up. For a directory app that might not get constant traffic, this means occasional slow first requests.

Cloudflare Workers don't have this problem. They scale to zero with virtually no cold start penalty. That's genuinely impressive.

But for my use case, a directory app that isn't launched yet and will have modest traffic at launch, a cold start of a few hundred milliseconds is fine. The developer experience improvement was worth it.

The other consideration is that Bunny Database is still in public preview. The API could change. But it's built on libSQL, which is stable. The risk is small.

## The Lesson

Edge runtimes are impressive technology. But they come with real constraints. If your app needs full Node.js features, file system access, native libraries, or anything beyond HTTP request handling, the edge runtime will fight you.

For a simple API or a middleware layer, Workers are great. I'd use them again for that. But for a Next.js app with database connections, image generation, and background jobs, standard Node.js on containers is just simpler.

Sometimes the cutting-edge option isn't the best option. Especially when the boring option works on the first try.

*Building something that needs to actually work? [Let's talk](https://course.coach).*
]]></content:encoded>
            <category>cloudflare</category>
            <category>bunny</category>
            <category>docker</category>
            <category>nextjs</category>
            <category>migration</category>
            <category>edge-computing</category>
            <category>deployment</category>
        </item>
        <item>
            <title><![CDATA[Why I Self-Host My Notes on a NAS]]></title>
            <link>https://ricsmo.com/blog/siyuan-self-hosted-notes-nas</link>
            <guid>https://ricsmo.com/blog/siyuan-self-hosted-notes-nas</guid>
            <pubDate>Sat, 25 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Why I moved from cloud note apps to self-hosted SiYuan Note on a Synology NAS, the privacy and control benefits, and the setup considerations.]]></description>
            <content:encoded><![CDATA[
I've tried a lot of note apps. Google Docs. Evernote. Apple Notes. Notion. Each one worked for a while, then I'd hit a wall.

The wall was always the same: my data lived on someone else's server, structured the way they decided, behind a paywall that could change at any time.

## The Cloud Lock-In Problem

Evernote was the first one I left. The free tier kept shrinking. The features I used got moved behind higher price tiers. And the notes I'd built up over years were trapped in a format I couldn't easily export.

Google Docs was better for collaboration. The search is great. But it's not really a note-taking system. It's a document editor. Try organizing 500+ interconnected notes in Google Drive and you'll see what I mean.

Notion was the closest to what I wanted. Databases, linked pages, templates. But the more I put into it, the more I realized I was building on rented land. Notion can change their pricing, their features, or shut down a product line whenever they want. My notes, my structure, my workflow, all dependent on a company's business decisions.

## What I Wanted

I needed three things from a note system:

1. **Local-first storage.** My data on my hardware, not someone else's.
2. **Linking and structure.** Notes that connect to each other, not just live in folders.
3. **No vendor lock-in.** If I want to leave, I take my data in a standard format.

I also wanted it accessible from anywhere. Local-first doesn't mean local-only.

## SiYuan Note on a Synology NAS

I ended up with [SiYuan](https://github.com/siyuan-note/siyuan). It's an open-source, local-first knowledge base that runs as a self-hosted app. Think Notion's page linking and block editing, but with all data stored as plain files on your own machine.

I run it on my Synology DS1520+ NAS. The NAS sits on my network with five 10TB drives in RAID, so my notes are protected against drive failure. SiYuan runs in a Docker container managed by Portainer.
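
If you want to replicate the container, it boils down to something like this (a sketch assuming the official `b3log/siyuan` image; the volume path and auth code are placeholders, not my actual config):

```
docker run -d --name siyuan \
  -v /volume1/docker/siyuan:/siyuan/workspace \
  -p 6806:6806 \
  b3log/siyuan --workspace=/siyuan/workspace/ --accessAuthCode=your-code-here
```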

The API lets other tools interact with it. I use it as the source of truth for curriculum content, operational docs, and project notes across all my businesses.

I also set up a Cloudflare Tunnel so I can reach it from anywhere without exposing ports on my home network. No VPN needed. The tunnel handles HTTPS automatically.

## What Made the Difference

The biggest shift wasn't technical. It was mental.

When you own your infrastructure, you think about your data differently. I started writing more detailed project notes because I knew they'd always be there. I started linking notes across business projects because the system supported it. I stopped worrying about hitting storage limits or feature paywalls.

There's something about knowing your notes are a folder of files on your own hardware. You can back them up however you want. You can script against them. You can grep them from the terminal. You can move them to a different app tomorrow if you want.

That last point is the real one. The freedom to leave is what makes staying worth it.

## The Tradeoffs

Self-hosting isn't for everyone. Here's what I gave up:

**No mobile app.** SiYuan has a web interface that works on mobile browsers, but it's not a native iOS or Android app. If you live in your phone, this will frustrate you.

**You're your own IT department.** Updates, backups, SSL certificates, Docker management. If something breaks at 2am, you're the one fixing it. I've been running servers long enough that this doesn't bother me, but it's real overhead.

**Initial setup takes effort.** Installing Docker on a Synology, configuring the container, setting up the Cloudflare Tunnel, making backups work. This is a weekend project, not a five-minute setup.

**Collaboration is harder.** SiYuan supports sharing, but it's not Google Docs. If you need real-time co-editing with a team, look elsewhere.

## Is It Worth It?

For me, yes. I run multiple businesses and I need a single place where everything connects. Curriculum content, client notes, project planning, technical documentation. Having that live on hardware I control, with an API I can script against, matters.

For someone who just needs to jot down grocery lists and meeting notes? Probably overkill. Apple Notes or Google Keep would work fine.

The middle ground is where it gets interesting. If you're a freelancer, consultant, or solopreneur who accumulates knowledge across projects and needs to find it later, a note system that you own is worth the setup effort. Every note you write today makes the notes you wrote yesterday more useful. That only works if the system is still there in six months, still structured the way you built it, still accessible without paying more for the privilege.

Self-hosting gave me that.
]]></content:encoded>
            <category>self-hosting</category>
            <category>knowledge-management</category>
            <category>synology</category>
            <category>siyuan</category>
            <category>productivity</category>
            <category>privacy</category>
        </item>
        <item>
            <title><![CDATA[Building a Transcription App for iPad]]></title>
            <link>https://ricsmo.com/blog/ipad-whisper-app-native-transcription</link>
            <guid>https://ricsmo.com/blog/ipad-whisper-app-native-transcription</guid>
            <pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Building a native Whisper transcription app on iPad using whisper.cpp and SwiftUI — no server, no Python, no monthly fee.]]></description>
            <content:encoded><![CDATA[
I've been running Whisper on my Mac and Windows machines to transcribe course videos. The Mac uses faster-whisper with Python, the Windows machine uses CUDA on a GTX 1080 Ti. Both work great. But I wanted to run Whisper on my iPad too.

Not through a browser. Not by SSH-ing into another machine. Actually running it natively on the iPad itself.

My iPad is an M1 iPad Pro 11-inch with 8GB RAM. The M1 chip is the same architecture Apple uses in their Macs. It can handle local AI inference. The question was whether I could actually build and deploy an app to use it without paying $99/year for an Apple Developer account.

Turns out you can. It's not pretty, but it works.

## The Stack

The secret ingredient is whisper.cpp. It's a C/C++ port of OpenAI's Whisper model that runs entirely on-device with no server, no API calls, no internet required. It has first-class support for Apple Silicon through Core ML and the ANE (Apple Neural Engine).

The app itself is SwiftUI (Apple's declarative UI framework). whisper.cpp ships with a working SwiftUI example app. The plan was to take that example, strip it down, and add the features I needed: batch file processing, progress tracking, and text export.

For signing, I'd use a free Apple ID through Xcode. The catch: free signing expires every 7 days. I'll get to that.

## The Build

The initial build was straightforward. Clone whisper.cpp, download the small.en model (~466MB, the right balance for 8GB RAM), build the XCFramework, and open the SwiftUI project in Xcode.

I had to install Xcode first. It's free from the Mac App Store but it's a 12GB download. After that, the build steps took about 45 minutes, including writing the custom Swift code.

The features I added on top of the stock example:

- Folder selection that recursively finds all audio files
- Batch queue with checkboxes so you can skip files
- Sequential processing (one at a time to respect the 8GB memory limit)
- Progress bar showing overall and per-file status
- Auto-save transcripts as `.txt` files
- Export all transcripts at once

I built the app to work with a USB drive workflow. I convert videos to WAV files on my Mac using ffmpeg, copy the WAVs to a USB drive, plug the drive into the iPad, and let it process through the queue.
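
The conversion step is one ffmpeg call per file. whisper.cpp expects 16kHz mono 16-bit PCM, so the command looks something like this (filenames illustrative; my exact flags may have differed):

```
ffmpeg -i lecture.mp4 -ar 16000 -ac 1 -c:a pcm_s16le lecture.wav
```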

## The Three Bugs That Nearly Killed It

The build compiled clean. The app deployed to the iPad. But the transcription results were blank. Every single file came back empty. What followed was three hours of debugging across three separate bugs.

### Bug 1: The Model Wasn't Actually Loading

The status bar showed green. "Model loaded." But the Resources/models folder was empty. The app was showing a success state for a model that didn't exist. The UI was lying to me.

Fix: Actually bundle the `ggml-small.en.bin` file into the Xcode project, and gate the green status on the model load actually succeeding instead of just setting it to true optimistically.

### Bug 2: Float32 vs Int16

The audio converter was writing WAV files in Float32 PCM format. But whisper.cpp's decoder reads Int16 PCM. So it was interpreting float data as integer data, producing complete garbage, which resulted in empty transcripts.

The fix was a one-line change in the audio format: switch from `pcmFormatFloat32` to `pcmFormatInt16`. One format flag. Hours of head-scratching.

### Bug 3: WAV Header Offset

Even after fixing the format, some files still failed. The decoder was hardcoding byte offset 44 for the audio data start. That's where the standard WAV header ends. But ffmpeg-written WAVs often have extra metadata chunks that push the audio data past offset 44.

Fix: Scan for the actual `data` chunk in the header instead of assuming it starts at a fixed position.
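
The app is Swift, but the chunk-scanning logic is the same in any language. The idea, sketched in TypeScript:

```ts
// Walk the RIFF chunks instead of assuming audio starts at byte 44
function findDataChunkOffset(wav: Buffer): number {
  let offset = 12; // skip "RIFF" + file size + "WAVE"
  while (offset + 8 <= wav.length) {
    const chunkId = wav.toString("ascii", offset, offset + 4);
    const chunkSize = wav.readUInt32LE(offset + 4);
    if (chunkId === "data") return offset + 8; // samples start right after this header
    offset += 8 + chunkSize + (chunkSize % 2); // chunks are padded to even lengths
  }
  throw new Error("no data chunk found");
}
```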

After those three fixes, the app finally worked. Audio file formats will lie to you constantly. Trust nothing. Verify everything.

## The Security-Scope Problem

There was another issue I didn't expect. When the user selects a folder through the iOS document picker, iOS grants temporary security-scoped access to those files. The access expires when the app returns to the main thread.

In practice, this meant the app could read the file list and display it in the queue, but by the time transcription started, the access had expired and every file read failed silently.

My first fix was to copy all the WAVs into the app's sandbox before the access expired. That worked, but it meant transcripts saved to the app's Documents folder instead of back to the USB drive. Then I had to worry about flattening the folder structure so files from different directories didn't overwrite each other.

The better fix was to hold the security-scoped access for the entire session. Read directly from USB, write transcripts directly back to USB, no sandbox copying. One `startAccessingSecurityScopedResource` call held open for the duration of the batch.

## The 7-Day Problem

This is the part nobody warns you about. With a free Apple ID, apps you build and deploy yourself expire every 7 days. After that, they simply won't open. The app is still installed, but iOS refuses to launch it.

The fix is to rebuild and redeploy from Xcode. Connect the iPad, press Cmd+R, wait two minutes. Done. The signing refreshes for another 7 days.

For a personal tool that I use in bursts, this is fine. I transcribe for a few days, then don't touch it for a week or two. When I need it again, I rebuild and deploy. The whole process takes about 90 seconds.

If you want to automate the refresh, there's an app called SideStore that handles it in the background. I haven't set that up yet since the manual refresh is fast enough for how I use it.

## What It's Good For

The M1 iPad Pro with the small.en model handles single-speaker, clear audio well. Course videos, podcasts, interviews with decent recording quality. A 45-minute video takes roughly 20-30 minutes to transcribe.

small.en chokes on multi-speaker content with varying audio quality and heavy accents. For that, you'd need the medium or large model, but 8GB of RAM gets tight with larger models. The Mac with its 32GB of unified memory is better suited for those files.

The iPad shines as a portable transcription station. Plug in a USB drive, let it work through a batch, export the transcripts when it's done. Everything stays on the device. No cloud uploads, no API costs, no internet required.

## The Takeaway

You don't need a developer account to build useful native apps for your iPad. whisper.cpp plus SwiftUI plus a free Apple ID gets you a fully functional, on-device AI tool. It takes some MacGyvering (the bugs alone cost me an afternoon), but the result is a transcription tool that runs entirely offline on hardware I already owned.

The 7-day signing limit is annoying but manageable. You also have to pick the right model size for 8GB of RAM. And iOS sandbox security will fight you when you try to read from external drives.

## Update: I Don't Use It Anymore

Shortly after writing this, I switched my batch transcription to distil-large-v3 on the Mac (using faster-whisper with Metal) and turbo on the Windows PC (CUDA on the GTX 1080 Ti). Both models are 4-5x faster than large-v3 with near-identical accuracy. Files that took hours now take seconds.

The iPad experiment was worth doing. I learned a lot about iOS development, whisper.cpp, and how far you can push on-device AI without a developer account. But now that the desktop machines handle the workload easily, there's no reason to fight iPadOS limitations anymore.

If you're in a similar spot, start with the optimized models before going down the iPad rabbit hole. You might not need to.

*If you're building something and keep running into walls, [that's what I do](https://course.coach).*
]]></content:encoded>
            <category>whisper</category>
            <category>ipad</category>
            <category>swift</category>
            <category>transcription</category>
            <category>native-app</category>
            <category>apple</category>
        </item>
        <item>
            <title><![CDATA[150 Blog Posts With an AI Content Workflow]]></title>
            <link>https://ricsmo.com/blog/ai-content-workflow-notarystyle</link>
            <guid>https://ricsmo.com/blog/ai-content-workflow-notarystyle</guid>
            <pubDate>Thu, 23 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[How to build an AI content workflow that generates, optimizes, and publishes SEO blog posts at scale — with human quality control at every step.]]></description>
            <content:encoded><![CDATA[
[NotaryStyle.com](https://notarystyle.com) needs content to rank. Notary supplies and training is a niche with real search volume, and to capture that traffic, the site needs pages targeting long-tail keywords.

Writing 150 articles manually would take months of full-time work. I didn't do that. I built an AI content workflow instead.

## The Workflow

1. Define a topic and target keyword.
2. Generate the article with AI.
3. Inject affiliate links from the product catalog.
4. Add internal crosslinks to related posts.
5. Pull a relevant stock image from Pexels.
6. Review for accuracy and tone.
7. Publish through the admin panel.

Each step is a script or a tool. The only manual part is the review.

## Batch Processing

Batch processing is the key insight. Instead of writing one article at a time, I generate 10 to 20 at a time, then review them in a batch.

Batching avoids the constant context switching between writing and reviewing that you'd get doing 150 articles one by one. Generate a batch, review a batch, publish a batch. Repeat.

## Affiliate Link Injection

My product catalog has ASINs, affiliate tags, and category mappings. The injection script matches article keywords to product categories and inserts affiliate links in relevant spots.

Not random placement. Contextual placement based on what the article is actually about. An article about notary seals gets links to seal products. An article about notary journals gets links to journals. The script reads the article content and matches it to the right products.

## Internal Crosslinking

Internal linking matters for SEO. The crosslinking script finds mentions of other article topics within each article and turns them into links. This builds the site's internal link graph, which helps all pages rank better.

An article about notary stamp requirements naturally mentions notary bonds, notary journals, and notary courses. The script turns those mentions into links to the corresponding articles. Each new article strengthens the existing ones.
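
Both the affiliate pass and the crosslink pass boil down to the same move: find a relevant mention, wrap it in a link. A simplified sketch (type shapes and names assumed; the real scripts do more validation):

```ts
type Target = { phrase: string; url: string };

// Turn the first plain-text mention of each target phrase into a markdown link
function injectLinks(markdown: string, targets: Target[]): string {
  let result = markdown;
  for (const { phrase, url } of targets) {
    const escaped = phrase.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    const mention = new RegExp(`\\b(${escaped})\\b`, "i");
    if (mention.test(result)) {
      result = result.replace(mention, `[$1](${url})`);
    }
  }
  return result;
}
```

For the affiliate pass, `url` is an Amazon product link carrying the affiliate tag. For the crosslink pass, it's the relative path to the related article.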

## Images From Pexels

Stock images come from Pexels. The site has a blog images table that stores Pexels attribution data: the photo ID, photographer name, photographer URL, and CDN path.

The script pulls images based on the article's topic and attaches them with proper photographer credit. No manual image searching, no downloading and resizing, no broken attributions.
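
The lookup is one call to the Pexels search API. A condensed sketch (response fields per the Pexels API docs; the database write is omitted):

```ts
async function fetchBlogImage(topic: string, apiKey: string) {
  const res = await fetch(
    `https://api.pexels.com/v1/search?query=${encodeURIComponent(topic)}&per_page=1`,
    { headers: { Authorization: apiKey } }
  );
  const { photos } = await res.json();
  if (!photos?.length) return null;
  const photo = photos[0];
  return {
    pexelsId: photo.id,                   // stored for attribution
    photographer: photo.photographer,
    photographerUrl: photo.photographer_url,
    src: photo.src.large,                 // CDN path saved to the blog images table
  };
}
```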

## The Admin Panel

The admin panel makes reviewing fast. Markdown editor with live preview. See the article, check the links, adjust the copy, hit publish.

No WordPress backend to navigate. The site runs on Next.js with a custom admin panel built on libsql and Drizzle ORM. It does exactly what I need and nothing I don't.

## Quality Control

Quality control is non-negotiable. AI-generated content still needs a human review.

I check facts. Notary law varies by state, and getting a state-specific detail wrong could actually cause problems for someone. I fix awkward phrasing. AI tends toward generic corporate language that doesn't match the site's tone. I ensure affiliate links point to the right products. A seal article linking to a journal product doesn't help anyone. I verify internal links go to relevant pages.

The AI drafts, I edit. That's the system. It's not fully automated and it's not supposed to be.

## The Bigger Picture

SEO isn't just about content volume. The site also has category pages, product comparison pages, and pillar content that supports the blog posts. The blog is part of a larger content strategy, not the entire strategy.

Each blog post targets a specific long-tail keyword. The category pages aggregate them into broader topics. The product pages capture commercial intent. Together they cover the full funnel from informational search to purchase.

150 articles took days to produce instead of months. The AI handled the drafting. I handled quality control. The scripts handled the repetitive parts. That's the workflow.

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>seo</category>
            <category>affiliate-marketing</category>
            <category>ai-content</category>
            <category>automation</category>
            <category>nextjs</category>
        </item>
        <item>
            <title><![CDATA[Why Every Solopreneur Needs a /uses Page]]></title>
            <link>https://ricsmo.com/blog/uses-page-solopreneur</link>
            <guid>https://ricsmo.com/blog/uses-page-solopreneur</guid>
            <pubDate>Wed, 22 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Why every solopreneur should have a tools/uses page, and how to build one with affiliate links that doesn't look like advertising.]]></description>
            <content:encoded><![CDATA[
A /uses page is exactly what it sounds like. A page where you list the tools, software, and hardware you actually use to run your business. It's been a thing since the early indie hacker days, popularized by sites like uses.tech.

I built mine at ricsmo.com/uses. Nine categories covering my desk setup, audio/video gear, development tools, design software, infrastructure, productivity apps, AI tools, hardware, and learning platforms. It took multiple revisions to get right.

There are two reasons to have one: credibility and revenue.

## Credibility Through Specificity

"I use an Apple M2 Max" is more believable than "I use a powerful computer." Visitors can see the exact tools behind your work. People considering hiring you or buying your course look at your stack and think, this person knows what they're doing.

Specificity builds trust in a way that vague claims don't. Anyone can say they're a developer. Listing Zed, VS Code, and the terminal they use shows they actually are one.

## Revenue Through Honesty

Amazon affiliate links on tools you genuinely use convert better than banner ads or random product placements. The key is the word "genuinely." If you're recommending a microphone you've never used, readers can tell.

The revenue is modest but passive. It won't pay your rent, but it covers hosting costs and then some. For a solopreneur watching every expense, that matters.

The styling of those links matters too. I use accent-colored link text (cyan on light, lighter cyan on dark) with `target="_blank"` and `rel="noopener noreferrer"`. They look like regular navigation, not ads. The moment a link looks like an ad, people stop trusting the page.
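
In JSX that comes out to roughly this (the Tailwind classes and the product are illustrative):

```tsx
<a
  href="https://www.amazon.com/dp/B0EXAMPLE?tag=my-affiliate-tag"
  target="_blank"
  rel="noopener noreferrer"
  className="text-cyan-600 dark:text-cyan-300"
>
  Shure MV7 microphone
</a>
```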

## What I Included

My page uses a two-column grid layout. Each section has a heading and a list of tools with brief descriptions and affiliate links where available.

Not everything has an affiliate link. Local Llama models, open-source tools, free services. The page isn't an ad. It's a transparent list of what I use.

I also added a "Familiar with" section for tools I've evaluated but don't use daily. This is useful for clients who want to know if I can work with their existing stack. It lists enterprise learning platforms like Blackboard, eCollege, Canvas, and Moodle, along with course platforms like Teachable, Kajabi, and Thinkific.

Twenty-five years in education means I've touched a lot of systems. Listing them shows range without claiming expertise I don't have.

## What I Cut

The hardest part was deciding what to include and what to remove.

"I tried this once" doesn't belong on a /uses page. Only tools you use regularly make the cut.

It took multiple revisions to get there. I removed tools I no longer use, like Roo Code, Kilo Code, and Cursor. I added ones I'd forgotten, like my iPad Pro, Apple Watch, and Yealink desk phone. I separated items that were incorrectly grouped on the same line, like Z.ai and OpenRouter.

Each revision made the page more honest. Less "look at everything I've tried" and more "here's what actually runs my business."

## Living Document

The page gets updated when tools change. That's a feature, not a bug. If someone visits my /uses page six months apart and sees the same list, something's wrong. Tools change. The page should reflect that.

A /uses page works because it's useful to the visitor and honest from the author. Get both of those right and the affiliate revenue follows naturally.

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>affiliate-marketing</category>
            <category>uses-page</category>
            <category>tools</category>
            <category>solopreneurship</category>
            <category>web-design</category>
        </item>
        <item>
            <title><![CDATA[A Cross-Platform Transcription Toolkit]]></title>
            <link>https://ricsmo.com/blog/building-transcription-toolkit-whisper</link>
            <guid>https://ricsmo.com/blog/building-transcription-toolkit-whisper</guid>
            <pubDate>Tue, 21 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A portable AI transcription toolkit using faster-whisper for Mac (Metal) and Windows (CUDA) — turn hundreds of course videos into searchable text.]]></description>
            <content:encoded><![CDATA[
I had a problem. Over the past few years, I bought dozens of online courses. Hundreds of hours of video across business strategy, marketing, web development, and course creation. All stored on an external drive. All impossible to search.

I wanted to build a personal knowledge base where I could search across all that content semantically — ask a question and get relevant answers from any course I've ever taken. The first step was getting all those videos transcribed.

The challenge: I have two machines, and they're very different.

- **Mac:** Apple M2 Max, 32GB unified memory, Metal GPU
- **Windows PC:** Intel i7-3970K, NVIDIA GeForce GTX 1080 Ti (11GB VRAM), CUDA

I needed a transcription toolkit that could run on both, use the GPU on each, and process files in bulk with minimal babysitting.

## Why Not Just Use a Service?

There are plenty of transcription services out there. Upload a video, get a transcript back. Some are even free for short clips.

But I had over 1,000 video files. At typical service pricing, that gets expensive fast. More importantly, I didn't want to upload hundreds of gigabytes of purchased course content to someone else's servers.

Running transcription locally solved both problems. No cost per file, no privacy concerns, and once the model is downloaded, it works offline.

## Choosing the Engine

OpenAI's Whisper is the go-to open-source transcription model. There are two main ways to run it:

**openai-whisper** (the original): Built on PyTorch. Supports CUDA on NVIDIA GPUs. Does not support Metal/MPS on Apple Silicon. On a Mac, it falls back to CPU, which is painfully slow.

**faster-whisper** (CTranslate2): A reimplementation that's 4x faster and uses less memory. The key advantage for my use case: it automatically uses Metal on Apple Silicon, even when you don't explicitly configure it. On the Windows side, it supports CUDA just like the original.

I tested both. The original Whisper on Mac CPU processed one video in roughly 17 hours. faster-whisper on Mac Metal processed the same video in about 40 minutes. That's not a small difference. That's the difference between finishing this project and giving up.

faster-whisper was the obvious choice.

## The Toolkit

I built a single Python script called `transcribe.py` that handles the whole pipeline:

- Scan a directory (and subdirectories) for video and audio files
- Process each file through faster-whisper
- Save the transcript as a plain text file next to the original
- Optionally delete the source file after transcription (to free up disk space)
- Log everything for review

The key design goal was portability. The same script runs on both machines with zero code changes. The only difference is the hardware it runs on, and faster-whisper handles that automatically.

### Model Selection

Whisper comes in several sizes: tiny, base, small, medium, large, and large-v3. Larger models are more accurate but slower and use more memory.

I went with large-v3, the largest model available. It's slower and uses more memory, but these aren't polished studio recordings. A lot of the course content is group conference calls with multiple speakers, varying audio quality, and different accents. The smaller models choked on that. Large-v3 handles it reliably. It handles technical terminology well, which matters when you're transcribing courses about APIs, Docker, and instructional design.

The large-v3 model file is about 2.9GB. Download once, use forever.

### File Discovery

The script walks through the target directory recursively. It recognizes common video formats (mp4, mkv, mov, avi, webm) and audio formats (mp3, wav, m4a, flac, ogg). For each file, it checks if a `.transcript.txt` already exists and skips it if so, making the process resumable.

This mattered a lot. With 1,000+ files, you don't want to restart from scratch every time something interrupts the run.

### Prefix Handling

Course videos are usually organized in folders like:

```
courses/
  Course Name 01/
    Module 1/
      01-Introduction.mp4
      02-Getting Started.mp4
  Course Name 02/
    ...
```

When you transcribe all these files into a flat folder or a vector database, you lose the folder structure context. You end up with hundreds of files named `01-Introduction.transcript.txt` from different courses, with no way to tell them apart.

I added a `--prefix-depth auto` option that automatically detects how deep the course folders are and prepends the folder names to each transcript. So `01-Introduction.mp4` becomes `Course Name 01_Module 1_01-Introduction.transcript.txt`. Now every transcript is unique and identifiable.

## The Windows Side

Setting up the Windows machine was more involved. The main challenge: CUDA support requires specific versions of Python, PyTorch (or CTranslate2), and NVIDIA drivers that all play nicely together.

Some things I learned the hard way:

**Python 3.13 doesn't work.** It's too new for the current PyTorch CUDA builds. Python 3.12 is the sweet spot.

**NVIDIA's CUDA toolkit alone isn't enough.** You also need the cuBLAS library. A simple `pip install nvidia-cublas-cu12` fixes the "DLL not found" errors that would otherwise crash everything.

**PATH matters.** The CUDA DLLs need to be findable at runtime. Adding `nvidia\cublas\bin` to the system PATH in the batch script wrapper resolves this.
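
The relevant lines of the wrapper look something like this (install path assumed for a venv setup):

```
set PATH=%PATH%;C:\whisper\venv\Lib\site-packages\nvidia\cublas\bin
python transcribe.py "D:\courses" --prefix-depth auto
```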

Once everything was configured, the GTX 1080 Ti (despite being a 2017 GPU) handled large-v3 without issue. 11GB of VRAM is plenty for this model.

## The Mac Side

The Mac setup was trivial by comparison. Install faster-whisper, point it at the directory, and go. Metal acceleration is automatic. The M2 Max handled large-v3 easily — 32GB of unified memory means it never runs out of room.

The bottleneck on Mac was sheer quantity. With hundreds of files to process, even at 40 minutes each, it takes days. I let it run in the background and checked progress periodically.

## What This Enabled

With all the transcripts generated, I loaded them into a vector database (Qdrant, running in a Docker container). Now I can search across every course I've ever taken using natural language:

"Show me everything about email marketing funnel optimization"
"Find discussions about pricing online courses"
"What do these courses say about SEO for membership sites?"

It's like having a personal research assistant that's read every course I own. When I'm writing blog posts, building course content, or solving a client problem, I can pull in relevant insights from my entire learning library.

## For Anyone Considering This

If you're sitting on a library of video content that you can't search, local transcription is worth the setup effort. The tools are free, the models are good enough for most purposes, and once it's running, it's mostly hands-off.

The main investment is time. If you have hundreds of files, plan for this to run for days. Start with the files you need most. Let it work through the rest in the background.

And if you have both a Mac and a Windows machine with a GPU, run them in parallel. Split the files in half and let both machines work. That's what I did.

---

If you're building systems to organize and leverage your knowledge, or if you want to talk about automating parts of your course business, [book a call](https://connect.course.coach/widget/bookings/book-a-call-course-coach).
]]></content:encoded>
            <category>whisper</category>
            <category>transcription</category>
            <category>AI</category>
            <category>GPU</category>
            <category>cross-platform</category>
            <category>automation</category>
            <category>knowledge base</category>
        </item>
        <item>
            <title><![CDATA[Deploying Static Sites to Cloudflare Pages]]></title>
            <link>https://ricsmo.com/blog/cloudflare-pages-deployment</link>
            <guid>https://ricsmo.com/blog/cloudflare-pages-deployment</guid>
            <pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Why Cloudflare Pages is the best default for static Next.js sites — automatic builds, preview deploys, free bandwidth, zero maintenance.]]></description>
            <content:encoded><![CDATA[
Static sites don't need a server. HTML, CSS, JS, and images. Serve them fast and call it done.

I've deployed two sites to Cloudflare Pages now: [course.coach](https://course.coach) and ricsmo.com. Both are Next.js static exports. The workflow is identical for both. Push, build, deploy. Same steps every time.

## How It Works

Cloudflare Pages deploys straight from GitHub. Push to main and it builds and deploys automatically. Push to any other branch and it creates a preview deploy with a unique URL. No FTP, no SSH, no manual steps.

The build environment is fast. Next.js static export on Cloudflare's builders completes in under two minutes for a site with 30+ pages.

The free tier includes unlimited bandwidth, unlimited requests, and 500 builds per month. That's hard to beat.

## Preview Deploys

Preview deploys are the feature I didn't know I needed. Every pull request gets its own URL. You can share it with someone for review without touching production.

This replaced the need for a staging environment. I don't have to maintain a separate deployment for testing. Branch, push, review on the preview URL, merge when ready.

## Custom Domains

Custom domains are one-click. Point your DNS to Cloudflare (or use Cloudflare DNS, which is what I do), add the domain in Pages, done. SSL is automatic. No certificate management, no renewal dates, no email reminders.

## What I Left Behind

I used Bunny Magic Containers for [NotaryStyle](https://notarystyle.com). Docker is great for apps that need a runtime, like a database connection or server-side logic. But for static sites, running a Docker container is overkill. You're paying for compute you don't use.

Before that, [NotaryStyle](https://notarystyle.com) was on shared hosting. PHP, cPanel, plugin updates, security patches. Static sites have none of that overhead. No PHP version to maintain. No plugin compatibility matrix. No backup scripts for MySQL.

I've also used VPS setups where you manage everything yourself. Updates, security, backups, monitoring. Cloudflare handles all of it.

## The Catch

There's a catch with static export. No server-side features. No dynamic routes, no API routes, no server-side image optimization. Everything has to be generated at build time.

RSS feeds, sitemaps, and OG images all have to come from build scripts that write static files, not from API routes. I run all of mine as part of the build pipeline: generate the OG images, optimize other images, build the RSS feed, build the sitemap, then run `next build`.

But for a blog or marketing site, you don't need server-side features. Build it once, serve it forever.

## The Stack

Both [course.coach](https://course.coach) and ricsmo.com use the same setup: Next.js with `output: "export"`, Tailwind CSS, MDX content, Cloudflare Pages. Push, build, deploy.

When I want to add a new blog post, I create an MDX file, commit it, and push. Cloudflare builds the site including the new OG image, the updated RSS feed, the updated sitemap, and deploys it. No manual steps in between.

That workflow is hard to beat for a solopreneur running multiple sites.

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>cloudflare</category>
            <category>deployment</category>
            <category>nextjs</category>
            <category>static-site</category>
            <category>web-development</category>
        </item>
        <item>
            <title><![CDATA[Auto-Generating Social OG Images]]></title>
            <link>https://ricsmo.com/blog/og-images-satori-sharp</link>
            <guid>https://ricsmo.com/blog/og-images-satori-sharp</guid>
            <pubDate>Sun, 19 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[How to auto-generate Open Graph images for every blog post using Satori, sharp, and Next.js build scripts.]]></description>
            <content:encoded><![CDATA[
When someone shares your blog post on Twitter, LinkedIn, or Slack, the image that shows up next to the link is the OG image. Most people skip it entirely or fire up Canva for every single post.

I got tired of both options. So I built a system that generates one for every blog post, automatically, during the build process.

## The Stack

Satori and sharp. Satori is a JSX-to-SVG renderer from Vercel. Sharp is an image processing library that converts SVG to PNG.

Together they let you create images with code, using the same typography and layout system as your site. No Canva, no manual exports, no "I'll add the image later" that turns into never.

The design itself is simple. Dark gradient background, thin accent-colored bar on the left, blog post title in Inter 800 weight, small site name at the bottom. I based it on an OG image generator I built for another project using Cloudflare Workers and a similar approach.

## Font Handling

Satori needs actual font files, not Google Fonts URLs. You can't just point it at a CDN and have it work.

I cache the Inter 800 weight locally at `.cache/Inter-800.woff`. The first build downloads it, and subsequent builds reuse it from disk. The font file is about 40KB. Not worth fetching on every build.

## How It Works

The script reads all blog post MDX files, pulls the title and slug from frontmatter, generates an SVG via Satori, then converts it to a 1200x630 PNG via sharp. Output goes to `public/images/og/{slug}.png`.

Since I use Next.js static export (`output: "export"`), there's no server-side image optimization. These are just static files served from `public/`, referenced in each blog post page's metadata.

The blog post page auto-references `/images/og/{slug}.png` when no `featuredImage` is set in frontmatter. For Twitter card metadata, I use `summary_large_image` for all blog posts. The OpenGraph and Twitter meta tags are set in `generateMetadata`.
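
Condensed, the core of the script looks like this (the design values are illustrative; the real script loops over every post). The element-as-object style here is also what trips the type checker, which is the second gotcha below:

```ts
import fs from "node:fs";
import satori from "satori";
import sharp from "sharp";

const inter = fs.readFileSync(".cache/Inter-800.woff");

async function generateOgImage(title: string, slug: string) {
  const svg = await satori(
    {
      type: "div",
      props: {
        style: {
          width: "100%", height: "100%", display: "flex", alignItems: "center",
          padding: 80, color: "#fff", fontSize: 64, fontWeight: 800,
          background: "linear-gradient(135deg, #0b1020, #1a2238)",
        },
        children: title,
      },
    },
    { width: 1200, height: 630, fonts: [{ name: "Inter", data: inter, weight: 800, style: "normal" }] }
  );
  await sharp(Buffer.from(svg)).png().toFile(`public/images/og/${slug}.png`);
}
```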

## Two Gotchas

First: Satori doesn't support `width: "fit-content"`. This one cost me a while. Use `"auto"` instead. Satori has a specific subset of CSS it supports, and fit-content isn't in it.

Second: Next.js type-checks all TypeScript files during build, including scripts in your `scripts/` directory. Satori's JSX object types don't satisfy ReactNode, so the build fails with type errors.

The fix is ugly but effective: `// @ts-nocheck` at the top of the script. It bypasses the type checker for that file only. Not ideal, but Satori's types aren't going to match React's expectations anytime soon.

## Build Pipeline

The script runs first in the build pipeline, before `next build`:

```
npx tsx scripts/generate-og-images.ts && next build
```

If image generation fails, the build stops before producing a site with missing images. That's the right behavior. I'd rather have a failed build than a live site with blank social previews.

The full pipeline generates OG images, optimizes other images, builds the RSS feed, builds the sitemap, and then runs `next build`. All sequential, all at build time.

## The Result

Every blog post gets a consistent, branded social image without me touching Canva. Write a new post, push to GitHub, image generates during build, site deploys with the image ready for sharing.

It's one of those things that feels like overkill to set up and then you wonder how you lived without it.

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>nextjs</category>
            <category>og-images</category>
            <category>satori</category>
            <category>sharp</category>
            <category>web-development</category>
        </item>
        <item>
            <title><![CDATA[A Consulting Website With No Monthly Fees]]></title>
            <link>https://ricsmo.com/blog/static-course-site-nextjs-mdx-cloudflare</link>
            <guid>https://ricsmo.com/blog/static-course-site-nextjs-mdx-cloudflare</guid>
            <pubDate>Sat, 18 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Building a consulting website as a static site with Next.js, MDX, and Cloudflare Pages — zero hosting costs, git-based workflow, no database.]]></description>
            <content:encoded><![CDATA[
I run a consulting business helping people build online courses. I needed a website. Not a course platform, not a membership site — a marketing site with a homepage, blog, and a way for people to book a call with me.

The obvious answer would have been WordPress. I've built dozens of WordPress sites over the years. Right now I run three training websites on WordPress — one with Sensei LMS, two with LearnDash — and I've hosted plenty more in the past. But for this, I wanted something different.

I wanted something I'd never have to maintain.

## Why Not WordPress?

WordPress is the right tool for a lot of things. My notary education site uses it with Sensei LMS and handles thousands of courses and students. But for a simple marketing site, it felt like bringing a cannon to a knife fight.

- **Database to maintain.** MySQL needs updates, backups, and occasional repair. And WordPress saves everything: every post revision, every draft you'll never use, every plugin setting. Over time that database bloats and slows down your site.
- **Security surface area.** WordPress powers 43% of the web, which makes it target number one for exploits. Every plugin you install is another potential entry point.
- **Hosting costs.** Decent WordPress hosting runs $20-50/month for a site that gets modest traffic. And that's before you add a CDN, which you'll want for performance.
- **Plugin updates.** Stay on top of them or get hacked. Ignore them long enough and updating becomes its own project. Update the wrong one and something breaks. It's a constant maintenance treadmill.

I wanted to version control my content, deploy with a git push, and never think about server maintenance again. Like the site version of "set it and forget it."

## The Stack

- **Next.js 16** — React framework with static site generation
- **MDX** — Markdown files with JSX components for blog posts
- **Tailwind CSS v4** — Utility-first styling
- **Cloudflare Pages** — Free hosting, global CDN, automatic deploys from GitHub

No database. No server. No PHP. Just static HTML files served from a CDN.

## How Content Works

Every blog post is a markdown file with some frontmatter at the top:

```markdown
---
title: "Why Most Online Courses Fail"
date: "2026-01-15"
category: "Course Creation"
tags: ["course-creation", "online-courses"]
excerpt: "Most courses fail because..."
---

The post content goes here in plain markdown.
```

No admin panel. No WYSIWYG editor. I write in whatever text editor I'm already using, commit to git, and the site rebuilds. If I want to embed a component in a post — a callout box, a code block, a custom layout — MDX lets me drop in JSX right in the markdown.

## Static Means Pre-Built

Next.js generates every page as static HTML at build time. When someone visits a blog post, they're getting a pre-built HTML file from Cloudflare's CDN. No server processes their request. No database query runs.

Page loads are fast. The site can handle traffic spikes without breaking. Hosting costs are zero.
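
The config that makes this happen is tiny. A minimal sketch, assuming the standard static export setting (my real config carries a few more options):

```ts
// next.config.ts: minimal sketch. 'output: export' turns every route into
// pre-rendered static HTML that Cloudflare Pages serves straight from the CDN.
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  output: 'export',
};

export default nextConfig;
```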

## But What About Dynamic Stuff?

RSS feeds and sitemaps need to update when content changes. I handle that with build scripts that run before `next build`:

```bash
node scripts/generate-rss.mjs
node scripts/generate-sitemap.mjs
next build
```

The scripts read the markdown files, generate RSS and sitemap XML, and dump them into the `public/` folder. They become static files like everything else.
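
Stripped down, the RSS script looks something like this. It's a sketch: I'm assuming `gray-matter` for frontmatter parsing, and the content directory and field names are illustrative:

```ts
// scripts/generate-rss.ts: a stripped-down sketch of the real script
import { readFileSync, readdirSync, writeFileSync } from 'node:fs';
import matter from 'gray-matter';

// Minimal XML escaping so titles with & or < don't break the feed
const esc = (s: string) =>
  s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

const posts = readdirSync('content/blog')
  .filter((file) => file.endsWith('.mdx'))
  .map((file) => matter(readFileSync(`content/blog/${file}`, 'utf8')).data);

const items = posts
  .map(
    (p) =>
      `<item><title>${esc(p.title)}</title>` +
      `<link>https://ricsmo.com/blog/${p.slug}</link>` +
      `<pubDate>${new Date(p.date).toUTCString()}</pubDate></item>`
  )
  .join('\n');

// Dump the finished XML into public/ so it deploys as a static file
writeFileSync(
  'public/rss.xml',
  `<?xml version="1.0" encoding="utf-8"?><rss version="2.0"><channel>${items}</channel></rss>`
);
```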

SEO metadata? Each page exports a metadata object with OpenGraph tags, Twitter cards, canonical URLs, and JSON-LD structured data. All injected at build time.
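
Here's the shape of that, as a sketch. `getPost` is a hypothetical helper that reads a post's frontmatter:

```tsx
// app/blog/[slug]/page.tsx: sketch of the per-page metadata export
import type { Metadata } from 'next';
import { getPost } from '@/lib/blog'; // hypothetical helper

export async function generateMetadata({
  params,
}: {
  params: Promise<{ slug: string }>;
}): Promise<Metadata> {
  const { slug } = await params;
  const post = getPost(slug);
  return {
    title: post.title,
    description: post.excerpt,
    alternates: { canonical: `https://ricsmo.com/blog/${slug}` },
    openGraph: { title: post.title, description: post.excerpt, type: 'article' },
    twitter: { card: 'summary_large_image' },
  };
}
```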

Booking and contact? I use GoHighLevel for my CRM and appointment scheduling. The site embeds GHL's calendar directly through an iframe. I tested multiple iframe heights before settling on 950px — anything shorter and the user has to scroll inside the widget. Those details matter.
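
The embed itself is nothing fancy. A sketch, with a placeholder standing in for the real GHL widget URL:

```tsx
// components/BookingEmbed.tsx: sketch; swap in your own GHL calendar embed URL
export function BookingEmbed() {
  return (
    <iframe
      src="https://example.com/your-ghl-calendar-widget"
      width="100%"
      height={950} // anything shorter forces a scrollbar inside the widget
      style={{ border: 'none' }}
      title="Book a call"
    />
  );
}
```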

## Deployment

The whole site lives in a GitHub repo. When I push to main, Cloudflare Pages:

1. Detects the framework (Next.js)
2. Runs the build command
3. Deploys the static output to their CDN

Preview deploys on every pull request. Rollbacks in one click. I haven't touched a server since setting it up.

## The Tradeoffs

Going static means giving up some things:

- **No server-side rendering.** If I need user accounts or dynamic content, that's a separate service.
- **No admin panel.** Editing content means editing markdown and pushing to git. Fine for me, but a non-technical client would struggle.
- **No server-side search.** Site search needs a third-party tool or a JavaScript-based solution. For a site with under 50 posts, a tag-based archive works fine.

None of these matter for a consulting business site. If I need dynamic features later, I can add Cloudflare Workers or edge functions without touching the core site.

## What It Actually Costs

- **Hosting:** $0 (Cloudflare Pages free tier)
- **Domain:** Whatever your registrar charges (I already owned [course.coach](https://course.coach))
- **Time to set up:** Under a day, start to finish
- **Monthly maintenance:** None

Compare that to a WordPress site: hosting, security plugins, backups, updates, performance optimization, a CDN to keep page loads reasonable, image compression to save server space, and scheduled database maintenance to keep things from slowing down. It adds up in both money and time.

## Was It Worth It?

Absolutely. Sub-second page loads from a global CDN. Git-based workflow where I edit, commit, push, and it's live. No security patches, no database backups, no plugin compatibility issues. The best infrastructure is the infrastructure you don't have to think about.

The site I built is at [course.coach](https://course.coach). Same stack powers my personal site at [ricsmo.com](https://ricsmo.com). Both have a homepage, blog, about page, and everything a consulting business needs — nothing it doesn't.

If you're building a marketing site for a small business — whether it's consulting, freelancing, or courses — static is worth a look. The tooling has gotten good enough that you don't need WordPress for everything anymore.
]]></content:encoded>
            <category>nextjs</category>
            <category>static-site</category>
            <category>cloudflare</category>
            <category>solopreneurship</category>
            <category>course-creation</category>
        </item>
        <item>
            <title><![CDATA[Adding Share Buttons to a Next.js Blog]]></title>
            <link>https://ricsmo.com/blog/share-buttons-nextjs</link>
            <guid>https://ricsmo.com/blog/share-buttons-nextjs</guid>
            <pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[The surprisingly fiddly details of adding social share buttons to a Next.js blog — icon library pitfalls, copy feedback UX, and share URL quirks.]]></description>
            <content:encoded><![CDATA[
I added share buttons to my blog this week. X, LinkedIn, Facebook, Reddit, and copy-link.

Five buttons. Should take an hour, maybe two. It took way longer.

## Beyond X and LinkedIn

I started with just X and LinkedIn. Obvious starting point for a solopreneur blog. But in tech and education niches, the actual sharing happens on Facebook and Reddit too. So I added those.

The copy-link button was the most "designed" one. You click it, it copies the URL, and the icon swaps from a clipboard to a checkmark so you know it worked. Simple concept. Fiddly execution.

## Icon Library Headaches

I used Lucide React for icons throughout the site. Lucide deprecated its brand icons (the project no longer maintains logos like Twitter and LinkedIn), so those two weren't usable from the package. I had to use inline SVGs for them instead of the icon component.

For the copy-link button, I grabbed the clipboard icon from Heroicons (outline, 24x24). It matched the stroke style of the other social icons well enough. The checkmark needed to be slightly different in size (20x20) to feel right at the same visual weight.

## Vertical Alignment Is the Real Time Sink

The social icons and the copy-link icon sit in a flex row together. Getting them to line up took longer than anything else.

`flex items-center` fixed it, but only after I tried `align-middle`, `vertical-align`, and manual padding on individual icons. Each approach fixed one icon and broke another.

The clipboard icon from Heroicons and the inline SVGs for X and LinkedIn all have slightly different viewBox dimensions. When they're side by side at the same pixel size, they don't visually align. One sits high, one sits low, one looks right. CSS flex alignment handles it, but finding the right combination took experimentation.
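
What finally worked, as a minimal sketch (each icon child also gets the same `h-5 w-5` size class):

```tsx
// ShareRow.tsx: flex centering plus one uniform size class per icon
import type { ReactNode } from 'react';

export function ShareRow({ children }: { children: ReactNode }) {
  // items-center vertically centers icons whose intrinsic dimensions differ
  return <div className="flex items-center gap-3">{children}</div>;
}
```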

## The Copy Feedback

The component is a client component (`'use client'`) because it uses `useState` for the copy feedback and `navigator.clipboard.writeText()`.

When someone clicks copy, the clipboard icon swaps to a checkmark for two seconds, then swaps back. The checkmark needed to be the right size relative to the clipboard icon so the text label next to it doesn't jump.
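
Here's the shape of that component, as a minimal sketch (my real version has more styling):

```tsx
'use client';

// CopyLinkButton.tsx: minimal sketch of the copy-feedback pattern
import { useState } from 'react';
import { ClipboardIcon, CheckIcon } from '@heroicons/react/24/outline';

export function CopyLinkButton({ url }: { url: string }) {
  const [copied, setCopied] = useState(false);

  async function copy() {
    await navigator.clipboard.writeText(url);
    setCopied(true);
    setTimeout(() => setCopied(false), 2000); // swap back after two seconds
  }

  return (
    <button onClick={copy} className="flex items-center gap-1">
      {/* clipboard renders at 24px, the checkmark at 20px, per the sizing above */}
      {copied ? <CheckIcon className="h-5 w-5" /> : <ClipboardIcon className="h-6 w-6" />}
      <span>{copied ? 'Copied' : 'Copy link'}</span>
    </button>
  );
}
```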

## Share URL Quirks

Share URLs are straightforward in theory but each platform has quirks.

X has a 280-character limit on the text parameter. LinkedIn's share URL works but doesn't allow custom prefill text anymore. Facebook's share dialog requires a valid URL that has proper OG tags or it shows a blank preview. Reddit needs either a specific subreddit or you send users to a subreddit picker.

Each share button opens a small popup window centered on screen instead of navigating away from the page. Standard pattern, but the window sizing and centering math has to be right. Too wide and it looks weird on mobile. Too narrow and the content clips.
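
For reference, here are the endpoints and the centering math as I use them. Treat it as a sketch and double-check the query parameters against each platform's current docs:

```ts
// share.ts: share endpoints plus the popup-centering math
const enc = encodeURIComponent;

export const shareUrls = {
  x: (url: string, text: string) =>
    `https://twitter.com/intent/tweet?url=${enc(url)}&text=${enc(text)}`,
  linkedin: (url: string) =>
    `https://www.linkedin.com/sharing/share-offsite/?url=${enc(url)}`,
  facebook: (url: string) =>
    `https://www.facebook.com/sharer/sharer.php?u=${enc(url)}`,
  reddit: (url: string, title: string) =>
    `https://www.reddit.com/submit?url=${enc(url)}&title=${enc(title)}`,
};

// Open a centered popup instead of navigating away; 600x500 worked for me
export function openShareWindow(href: string, w = 600, h = 500) {
  const left = window.screenX + (window.outerWidth - w) / 2;
  const top = window.screenY + (window.outerHeight - h) / 2;
  window.open(href, '_blank', `width=${w},height=${h},left=${left},top=${top}`);
}
```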

## Mobile Touch Targets

Mobile was its own thing. The share buttons need to be touch-friendly. Not just the tap target size, but the visual feedback. People need to know their tap registered before the popup appears.

On mobile, the popup approach doesn't work as well because browsers handle popup windows differently. The native Web Share API would be ideal, but browser support is inconsistent and it doesn't let you customize which platforms appear.

I kept the popup approach for desktop and made sure the tap targets were large enough for mobile. Not perfect, but good enough for now.
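
If you do want the native route where it exists, the feature detection is a one-liner. A sketch, falling back to the popup helper from the previous section:

```ts
// Native share sheet where supported, popup fallback everywhere else
import { shareUrls, openShareWindow } from './share';

export async function share(url: string, title: string) {
  if (navigator.share) {
    try {
      await navigator.share({ url, title });
    } catch {
      // user dismissed the sheet; treat it as handled
    }
    return;
  }
  openShareWindow(shareUrls.x(url, title));
}
```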

## The Lesson

Five buttons. Icon deprecations, vertical alignment headaches, clipboard feedback, popup math, mobile touch targets. That's what "simple" turns into when you actually build it.

Nothing on the web is as simple as it looks from the outside. Every "just add share buttons" feature is a collection of small decisions that each take time to get right.

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>react</category>
            <category>nextjs</category>
            <category>web-development</category>
            <category>ux</category>
        </item>
        <item>
            <title><![CDATA[Rebuilding NotaryStyle in Next.js]]></title>
            <link>https://ricsmo.com/blog/wordpress-to-static-migration</link>
            <guid>https://ricsmo.com/blog/wordpress-to-static-migration</guid>
            <pubDate>Thu, 16 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Lessons from rebuilding a WordPress affiliate site as a Next.js application with libsql, Drizzle ORM, and Bunny Magic Containers.]]></description>
            <content:encoded><![CDATA[
[NotaryStyle.com](https://notarystyle.com) started as a standard WordPress site with affiliate links and product recommendations for notaries.

I decided to rebuild it as a custom Next.js application. Not a static site, not a headless CMS, but a full app with its own database, admin panel, and automated deployment pipeline.

Here's what actually happened.

## Why I Left WordPress

WordPress stores everything in a database: posts, revisions, settings, plugin configs, user data. Over years that database bloats with post revisions, abandoned drafts, spam comments, and transient data. Every plugin adds tables. Every update risks breaking something.

The maintenance treadmill never stops. Security patches, plugin compatibility, PHP version updates, backup scripts. For a site that's essentially a blog with affiliate links, the overhead was outsized.

Before pulling the trigger, I searched my transcript knowledge base using [Qdrant](https://qdrant.tech/) for WordPress pain points. The data confirmed what years of running WordPress sites had already taught me. I needed to get off it.

## What I Built

The new [NotaryStyle](https://notarystyle.com) is a Next.js application using:

- **libsql** (a fork of SQLite) as the database, accessed through **Drizzle ORM**
- **Bunny Magic Containers** for deployment (Docker containers triggered by GitHub pushes)
- **Bunny Storage** as a CDN for blog images
- An admin panel with a full Markdown editor for managing posts, products, categories, and affiliate links
- Automated Docker builds: push to GitHub, container rebuilds, deploys automatically

The database has tables for blog posts, blog categories, products, product categories, product cache (populated from the Amazon API), admin users, and sync logs. Blog posts store MDX content directly in the database alongside metadata like slug, tags, category, featured image, and publication status.

I used Drizzle ORM for type-safe queries instead of raw SQL. The schema lives in TypeScript files, which means the database structure is version-controlled right alongside the application code.
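
A trimmed sketch of what that looks like. Table and column names here are illustrative, not the exact production schema:

```ts
// db/schema.ts: trimmed sketch of the blog_posts table
import { sqliteTable, text, integer } from 'drizzle-orm/sqlite-core';

export const blogPosts = sqliteTable('blog_posts', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  slug: text('slug').notNull().unique(),
  title: text('title').notNull(),
  content: text('content').notNull(), // MDX stored directly in the database
  category: text('category'),
  tags: text('tags'), // JSON-encoded list
  featuredImage: text('featured_image'),
  published: integer('published', { mode: 'boolean' }).notNull().default(false),
});

// db/index.ts: libsql client wired into Drizzle
import { drizzle } from 'drizzle-orm/libsql';
import { createClient } from '@libsql/client';

export const db = drizzle(createClient({ url: 'file:./data/site.db' }));
```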

## Writing Posts With AI

The old WordPress site had some content, but most of what's on the site now was written for the Next.js app using an AI content workflow.

I built scripts that could generate blog articles, inject affiliate links, add internal crosslinks, and pull relevant stock images. Each batch went through a review pass where I'd check accuracy, fix affiliate link placement, and adjust the tone. Then straight into the database.

The AI handled the heavy lifting of drafting. I handled quality control. The admin panel made reviewing and publishing each post fast.

## Images

Blog images live on Bunny Storage CDN. The Next.js config includes a rewrite rule that proxies any request to `/images/*` to `images.notarystyle.com`, so image references in blog post content work without needing to update every file individually.
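
That rule is a few lines of config. Roughly:

```ts
// next.config.ts: proxy /images/* to the Bunny Storage CDN zone
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  async rewrites() {
    return [
      {
        source: '/images/:path*',
        destination: 'https://images.notarystyle.com/:path*',
      },
    ];
  },
};

export default nextConfig;
```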

## The Amazon API Deprecation

Just as I finished the rebuild and got everything running smoothly, Amazon deprecated the Product Advertising API version I was using. The product cache system that automatically fetched pricing, availability, and product details from Amazon stopped working.

Every product listing on [NotaryStyle](https://notarystyle.com) depended on that API. Display prices, availability status, ratings, review counts. All of it.

I had to migrate to the new API version, update the authentication flow, restructure the product cache sync scripts, and deal with new rate limits and response formats. The migration itself was straightforward code-wise, but it meant touching the product catalog system, the sync scripts, and the admin panel's product management tools.

The timing couldn't have been worse. Or better, I suppose. If I'd still been on WordPress, I would have been at the mercy of whatever plugin handled Amazon product data. Instead, I owned the integration end to end. I updated the code, pushed to GitHub, and the containers rebuilt automatically.

## What I'd Do Differently

Don't wait until your WordPress site is big to rebuild. Every post you create in WordPress is content you'll eventually need to deal with. The migration debt compounds.

If I were starting [NotaryStyle](https://notarystyle.com) today, I'd build it on Next.js from day one. The admin panel is more work upfront, but it eliminates years of WordPress maintenance and plugin dependency.

## Was It Worth It?

Yes. The site is faster. The admin panel does exactly what I need and nothing I don't. Deployment is automated. When Amazon broke their API, I fixed it in my own codebase instead of waiting for a plugin update.

But if your WordPress site is small and working fine, the ROI of a full rebuild isn't there. Start with custom if you can. Migrate only if the maintenance cost of staying on WordPress exceeds the cost of rebuilding.

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>wordpress</category>
            <category>nextjs</category>
            <category>drizzle</category>
            <category>web-development</category>
            <category>database</category>
        </item>
        <item>
            <title><![CDATA[Writing Homepage Copy That Converts]]></title>
            <link>https://ricsmo.com/blog/homepage-copywriting-lessons</link>
            <guid>https://ricsmo.com/blog/homepage-copywriting-lessons</guid>
            <pubDate>Wed, 15 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Lessons from rewriting homepage copy for a solopreneur consulting site — what to include, what to delete, and how to think about every section.]]></description>
            <content:encoded><![CDATA[
I built [ricsmo.com](https://ricsmo.com) and [course.coach](https://course.coach) from scratch. Both sites went through multiple rounds of homepage copy. I deleted entire sections that sounded perfectly fine but didn't serve a purpose.

My background is in education and course creation, so I think about websites like curriculum design. Every element needs to earn its place. If it doesn't move the visitor toward a decision, it goes.

## Start With the Visitor, Not Yourself

Most homepage copy is written backwards. People start with what they want to say, not what the visitor needs to hear. I did this myself at first. You sit down to write your homepage and naturally start listing your credentials, your services, your story. But the visitor doesn't care about any of that yet.

The hero section is not about you. It's about the visitor's problem and whether you're the person to solve it. Every word in that section needs to answer one question: should I keep reading?

If a sentence doesn't help answer that, delete it.

## Social Proof Needs Context

I had stats in my hero section early on. "39,000+ trained" sounds great in your head, but it means nothing to someone who doesn't know what you train people in. Context has to come before numbers, or the numbers feel inflated and empty.

I moved the stats down the page where the context was already established. Same numbers, more impact.

## "Built by One Person" Backfires

I removed a "built by one person" framing from my site early on. It sounds impressive to the builder. To the visitor, it sounds like you might not have capacity to help them.

Same energy as a restaurant boasting about having a small kitchen. The customer doesn't think "dedicated artisan." They think "long wait times."

## Outcomes Over Deliverables

Service sections trip people up constantly. They state the deliverable instead of the outcome.

"I build online courses" is a deliverable. Nobody wakes up wanting to build an online course. They wake up wanting to turn their expertise into revenue.

"I help you turn your expertise into a course that generates revenue" is an outcome. That's what belongs on your homepage.

## Don't Put Your Origin Story on the Homepage

The About page is where you earn trust with your story. The homepage is where you earn attention with their problem. Don't confuse the two.

I see people dumping their origin story into the homepage hero, and it kills momentum every time. The visitor is thinking "can this person help me?" and you're answering with "here's where I went to school."

Save the story for the page where the visitor has already decided they're interested.

## One CTA Per Section

CTAs should match intent. If someone is exploring, "Learn More" works. If someone is ready, "Book a Call" works.

The mistake is stacking three competing CTAs in one section. One CTA per section. Make the choice obvious.

## Navigation Answers a Question

Navigation should answer "what else do you have?" in one scan. Don't bury your services or your contact page in a hamburger menu on desktop. I've seen consultants hide their best pages behind extra clicks.

If someone wants to hire you, make it easy to figure out how.

## Delete Good Writing

Less copy, more deliberate copy. I deleted entire sections from my homepage because they didn't move the visitor toward a decision.

Deleting good writing is hard. Keeping only the writing that earns its place is harder. But that's the job.

Think of it like editing a course. Every lecture needs to justify its existence. If a lecture doesn't move the student closer to the outcome, cut it. Your homepage works the same way.

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>copywriting</category>
            <category>homepage</category>
            <category>web-design</category>
            <category>consulting</category>
            <category>conversion</category>
        </item>
        <item>
            <title><![CDATA[LinkedIn Profiles: Clients vs Recruiters]]></title>
            <link>https://ricsmo.com/blog/linkedin-profiles-written-for-clients</link>
            <guid>https://ricsmo.com/blog/linkedin-profiles-written-for-clients</guid>
            <pubDate>Tue, 14 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Why consultants need a different LinkedIn strategy than job seekers — and how to write your profile to attract clients, not recruiters.]]></description>
            <content:encoded><![CDATA[
When I started building my own training company and optimizing my LinkedIn profile, I followed the standard advice. Headline templates. About section formulas. Experience bullet points.

None of it worked.

The problem was obvious once I saw it. Most LinkedIn profile advice assumes you want a job. Every template, every format, every "pro tip" is designed for recruiters scanning for keywords.

That's the wrong frame if you're also looking for clients.

## The Audience Matters

I learned this the hard way with my own profile. Vague descriptions. No numbers. Lists of responsibilities instead of results. The same problems you see in resumes show up in LinkedIn profiles. Show me what you accomplished, not what you were assigned to do.

Clients on LinkedIn are looking for the same thing. They scan your profile for evidence that you can solve their problem, not a list of what you were responsible for at each job.

The person reading my profile is a potential buyer, not a screener. That changes everything about how you write.

A client doesn't care about your job titles. They care about whether you can solve their problem. Your "Founder of Whatever" title means nothing without context. A business owner who needs a course built doesn't care what you call yourself. What means something is whether you've built one that works and gotten results.

When you write your profile for recruiters, you get recruiter results. When you write it for clients, you get client inquiries. Same platform, different audience, completely different copy.

## The Headline Problem

Your headline should answer "who do you help and what do they get?" That's it. Not your job title. Not a clever tagline. Not "Serial Entrepreneur | Visionary Leader | Thought Provoker."

A clear statement of the outcome you deliver. Who the client is. What changes after working with you.

If someone reads your headline and can't tell what you actually do for people, the headline failed.

## The About Section

The About section is a sales page, not a biography.

I structure mine simply. A hook that names the problem my clients face. Then credibility markers with specific results. Then a description of who I work with best. Then a call to action telling them what to do next.

No long-winded origin story. No list of every job I've held since 2001. No mission statement about "passionate dedication to excellence." Just the framework that moves someone to reach out.

Hook, problem, credibility, CTA. The same structure that converts on landing pages works on LinkedIn.

## Experience Is Evidence, Not History

Your experience section should highlight results, not responsibilities. One sentence summarizing the role. Then three to five bullets with specific outcomes.

"I managed a team of instructors" is a responsibility. "Led a team of 12 instructors that launched 50 courses in 18 months" is a result.

Numbers matter. Percentages matter. Timeframes matter. The difference between "increased enrollment" and "increased enrollment 40% in one semester" is the difference between a claim and proof.

This section also creates space for keyword injection. LinkedIn's search algorithm scans your work experience descriptions. Use that space to work in terms your clients search for. Just keep the copy selling, not summarizing.

## The Platform Advantage

LinkedIn is one of the few social platforms where your audience is already in a professional mindset. They're not scrolling for entertainment or killing time. They're looking for solutions, connections, and people who can help them.

That's a massive advantage if your profile actually addresses their problems. Most profiles don't. They're digital resumes sitting in a room full of buyers.

## Multiple Audiences

Your profile needs to work for multiple audiences at once. Direct clients who might hire you. Referral partners who could send work your way. Larger companies looking for consulting engagements.

The same profile has to serve someone ready to buy and someone deciding whether to introduce you to their network. And yes, it can still serve recruiters, because results-first writing works for every reader. Writing for all three isn't hard. It just means being clear about what you do and who it's for.

## Thought Leadership Without the Noise

Thought leadership on LinkedIn gets oversimplified. It's not about posting frequency. It's about building influence through understanding how people make decisions.

Post with intention, not obligation. Three posts a month that demonstrate expertise and help someone solve a problem beat daily content that says nothing. Focus on one topic at a time, one person at a time, one problem at a time.

## Rewrite for Buyers

If you're a consultant, stop writing your profile only for people who want to hire you for a job. Write it for clients too.

The difference shows up in your headline, your about section, your experience bullets, and your featured content. Every section should answer one question from the client's perspective: "Can this person help me?"

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>linkedin</category>
            <category>consulting</category>
            <category>personal-branding</category>
            <category>freelancing</category>
            <category>clients</category>
        </item>
        <item>
            <title><![CDATA[Recovering 25K Deleted Vector DB Records]]></title>
            <link>https://ricsmo.com/blog/qdrant-dedup-disaster</link>
            <guid>https://ricsmo.com/blog/qdrant-dedup-disaster</guid>
            <pubDate>Sun, 12 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[How a filename-based deduplication wiped out thousands of unique records from a Qdrant vector database — and the backup strategy that saved everything.]]></description>
            <content:encoded><![CDATA[
I've been building a personal knowledge base. The idea is simple: transcribe over a thousand course videos, chunk the text, and load everything into a vector database for semantic search.

The setup uses [Qdrant](https://qdrant.tech/) running in a Docker container on my Mac. The upload script chunks each transcript into roughly 500-token segments with a 50-token overlap. Each chunk stores metadata including the filename, file path, and chunk index.
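
A sketch of that chunking logic (the real script counts tokens with a tokenizer; this version approximates with whitespace-separated words):

```ts
// Overlapping windows: each chunk starts (size - overlap) words after the last
function chunkTranscript(text: string, size = 500, overlap = 50): string[] {
  const words = text.split(/\s+/);
  const chunks: string[] = [];
  for (let start = 0; start < words.length; start += size - overlap) {
    chunks.push(words.slice(start, start + size).join(' '));
    if (start + size >= words.length) break; // last window reached the end
  }
  return chunks;
}
```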

Vector databases store text as numerical embeddings. You search them with natural language questions, not exact text match. Ask "how do you handle objections in sales calls" and it returns relevant chunks even if those exact words never appear in the source material.

Everything was working well. The upload was processing files. Around 400 files into a batch of 9,879, I decided to run a deduplication script to clean things up.

## The Mistake

The dedup script checked for duplicate filenames. If two records had the same filename, it kept one and deleted the other.

Sounds reasonable. Except I have dozens of courses, and they all have files named "01-Introduction.mp4."

Same filename, completely different course content. One by one, the dedup script removed them.

The damage report: 25,666 records deleted. Most were not duplicates at all. They were unique course transcripts that happened to share common filenames with other courses.

Gone.

## How I Caught It

I noticed the collection size had dropped dramatically. What was supposed to be growing was shrinking. I checked the dedup script's logs and saw it was removing records by filename alone, with no path check.

A file called "01-Introduction.mp4" in a copywriting course and a file called "01-Introduction.mp4" in an SEO course are completely different content. The script treated them as the same file.

## The Recovery

I had a full backup export from before the dedup ran. That backup contained 3,846 points at 29.2 MB. It wasn't everything (the upload was still in progress), but it was enough to start over without losing progress.

I wiped the collection completely and began re-uploading from scratch.

The re-upload is still running. With nearly 10,000 files to process, it takes time. But every record going in is clean, verified, and won't be touched by a dedup script again.

## Three Rules I Learned

**Always dedup by a compound key.** Filename plus path. Never filename alone (see the sketch after these rules). Two files with the same name in different directories are almost certainly different content.

**Always backup before bulk operations.** No exceptions. I ran the dedup against my live collection without a recent backup. If I hadn't exported the collection the week before, the loss would have been permanent.

**Test on a small sample first.** I should have run the dedup script against 50 records and manually verified the results before letting it loose on the entire collection. Ten minutes of testing would have revealed the filename-only logic flaw.
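
To make the first rule concrete, here's roughly the check the script should have run. A sketch; the payload shape mirrors the metadata described earlier:

```ts
// Dedup on path + filename, not filename alone
type Point = { id: string; payload: { filename: string; path: string } };

function findDuplicateIds(points: Point[]): string[] {
  const seen = new Set<string>();
  const toDelete: string[] = [];
  for (const p of points) {
    const key = `${p.payload.path}::${p.payload.filename}`; // compound key
    if (seen.has(key)) toDelete.push(p.id); // only true duplicates get flagged
    else seen.add(key);
  }
  return toDelete;
}
```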

## The Broader Lesson

Vector databases are powerful but unforgiving. There's no undo button. No transaction log. No way to roll back a bulk delete. Once records are removed, they're gone.

This is different from relational databases where you can wrap operations in transactions and roll back if something goes wrong. Vector databases are optimized for search performance, not for data safety.

Bulk operations against any database deserve caution. Run them against a test collection first. Verify the results by hand. Only then apply to production.

The backup strategy saved me here. I got lucky. The next time I run a bulk operation against my knowledge base, I won't rely on luck.

*If you're building systems to organize and leverage your knowledge, or if you want to talk about automating parts of your course business, [book a call](https://course.coach).*
]]></content:encoded>
            <category>vector-database</category>
            <category>qdrant</category>
            <category>knowledge-base</category>
            <category>mistakes</category>
            <category>data</category>
        </item>
        <item>
            <title><![CDATA[What I Learned From a Contract Rejection]]></title>
            <link>https://ricsmo.com/blog/contract-rejection-postmortem</link>
            <guid>https://ricsmo.com/blog/contract-rejection-postmortem</guid>
            <pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[An honest postmortem on a rejected contract application — and what I'd change about my approach next time.]]></description>
            <content:encoded><![CDATA[
Yesterday I wrote about using AI to build a resume system. Today I get to talk about what happens when that system produces a rejection.

A Fortune 100 staffing agency posted a Senior Instructional Designer contract. $55-65/hr. FinTech/SaaS company. Right in my wheelhouse.

I spent real time on this one. Tailored every section of the resume. Hit 15 out of 15 keywords from the job description. Wrote a cover letter under 400 words. Ran everything through my de-ai-ification process using knowledge base material from 41 work sessions and specific writing rules.

The materials were objectively strong.

They said no.

## The Sting

The rejection stung because I knew the work was good. Not perfect, but good. Checked every box I could see. Still got filtered out.

I don't know why. Could be an experience gap I'm not seeing. Could be they found someone with direct FinTech experience. Could be timing. Could be an AI detector caught something I missed. Could be they had an internal candidate the whole time.

I genuinely don't know, and I probably never will.

## What I'd Do Differently

**Ask for feedback.** Most agencies won't give it. Some might. It costs nothing to ask, and one useful data point changes your approach more than guessing.

**Apply to more roles simultaneously.** I treated this one like it mattered. In volume, no single application matters that much. Five applications in a week beats one perfect application. I was putting too much weight on each individual opportunity.

**Stay skeptical of my own output.** I de-ai-ified the cover letter. But AI has tells that humans miss because we're too close to the text. The same way you can't proofread your own writing effectively. I'd want fresh eyes on anything I generate.

**Track conversion rates.** If I'm applying for roles and getting no responses, the problem isn't one bad application. It's a pattern. I need enough data points to identify the pattern.

## The Meta-Lesson

Rejection in consulting is normal volume business. The people who succeed send more applications, not better ones. One data point doesn't mean the system is broken. It means the system produced one data point.

Freelancers who earn six figures on platforms often send hundreds of proposals. They don't bat a thousand. They don't even bat five hundred. But they send enough volume that the wins pile up.

I was treating this application like it was special. It wasn't. It was one line item in what should be a long list of applications.

## The Framing Matters

I'm building a business, not job hunting. This contract was a way to pay bills while I grow [course.coach](https://course.coach). Keeping that framing matters.

Consulting gigs are fuel for the main thing, not the main thing itself. When I treat a rejection as a business event instead of a personal failure, it loses its sting.

One rejected contract application doesn't change my strategy. It changes my volume.

## The Best Response

The best rejection response is the next application. Not dwelling. Not over-analyzing. Not rewriting my resume for the third time. Just send the next one.

I learned that from studying successful freelancers. The ones who win aren't the ones with the best proposals on paper. They're the ones who send the most proposals and learn from the results.

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>freelancing</category>
            <category>job-search</category>
            <category>resumes</category>
            <category>consulting</category>
            <category>rejection</category>
        </item>
        <item>
            <title><![CDATA[Client Communication Habits Pros Use]]></title>
            <link>https://ricsmo.com/blog/client-communication-habits</link>
            <guid>https://ricsmo.com/blog/client-communication-habits</guid>
            <pubDate>Fri, 10 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[How top freelancers communicate with clients — response time, proposals, status updates, bad news, and the consultation call.]]></description>
            <content:encoded><![CDATA[
I've spent a lot of time studying what makes top freelancers tick. Not the influencers selling the dream. The actual people quietly earning six figures from a handful of clients.

One pattern shows up over and over. It's not about having the best portfolio or the lowest price. It's how they communicate.

Communication is the skill nobody teaches freelancers. Your portfolio gets you in the door. Your communication keeps you in the room.

## Speed Matters More Than You Think

On freelance platforms, fast response times correlate directly with higher rankings and more repeat clients. The top sellers respond within an hour during business hours.

This isn't about being available 24/7. It's about being responsive during the hours your clients are actually working. If your clients are on the East Coast and you're on the West Coast, you need to be checking messages during their morning, not yours.

Slow responses signal that you're not prioritizing the work. Fast responses signal reliability before you've delivered a single thing.

## The First Message Sets Everything

A vague response like "I can do that, let me know when to start" loses the client to someone who took thirty extra seconds to be specific.

The winning message looks like this: "I've done similar work for online course creators. Here's my approach. I can start Thursday. Here's what I need from you." You're demonstrating competence in your very first interaction. You're showing you understand their situation. You're making it easy for them to say yes.

Compare that to the generic response and the difference is obvious. One says "I read your message." The other says "I read your message and I already know how to help."

## Specific Proposals Beat Vague Ones

Generic proposals get ignored. Specific proposals get responses.

Include scope, timeline, deliverables, and price. Don't leave gaps for the client to fill in with their imagination. That imagination usually works against you.

When you say "I'll deliver a course redesign" you're leaving too much open. When you say "I'll redesign your course with updated structure, new assessments for each module, and a revised instructor guide, delivered in three weeks, for $2,500" you're giving them something real to evaluate.

Specificity also prevents scope creep. When the client asks for something outside the proposal, you can point to what was agreed on. "That's outside the scope we discussed. I can add it for an additional $500." No awkwardness, no resentment.

## Bad News Delivered Early Beats Bad News Delivered Late

If a project is going to miss a deadline, tell the client now. They'll be annoyed. They'll be furious if they find out the day before.

I've seen freelancers lose long-term clients over a single delayed project where they hid the problem. The clients didn't fire them for being late. They fired them for being surprised.

Early bad news gives the client options. They can adjust their own timeline, reprioritize deliverables, or allocate resources differently. Late bad news gives them only one option: question whether they can trust you again.

The same applies to scope issues. If you realize halfway through that the project is bigger than estimated, say so. "This is turning out to be more complex than we planned. Here's why, and here are two options for how to handle it." Clients respect honesty. They resent surprises.

## Over-Communicate on Purpose

Send a weekly status update even when there's nothing to report. "Everything is on track, here's what I finished this week" takes thirty seconds and prevents the client from wondering if you forgot about them.

Silence makes clients nervous. They start imagining problems that don't exist. A quick update kills that anxiety before it starts.

The format doesn't matter much. An email, a message on the platform, a quick call. What matters is consistency. Same day each week, same level of detail. The client learns to expect it and stops worrying.

The clients who feel taken care of are the clients who come back.

## The Consultation Call Is Not a Sales Call

Top freelancers treat it as a discovery session. Ask questions. Understand the client's problem. Propose an approach.

The pitch happens naturally when the client sees you understand their situation. You don't need to convince them. You need to demonstrate that you get it.

If you walk into a consultation call thinking about closing, the client can feel it. Walk in thinking about understanding, and the closing takes care of itself.

The best consultation calls end with the client asking "so how do we get started?" That happens when they feel heard, not when they feel sold to.

## Saying No Is a Power Move

"I don't think this project is a good fit for me" signals confidence and expertise. Clients respect freelancers who turn down work that isn't in their wheelhouse.

It also prevents the nightmare project that takes twice as long and ends with a bad review. Every experienced freelancer has a story about the project they should have walked away from. Learn from those stories.

The interesting thing is that turning down work often leads to more work later. The client you said no to might refer you to someone else. "They were honest about what they could and couldn't do. That's rare. Call them." Referrals like that are worth more than any single project.

## Follow Up After Delivery

After you deliver the work, check in. "How's it working out? Anything you need adjusted?"

This turns a one-time project into a repeat client. It shows you care about the outcome, not just the paycheck. Most freelancers disappear after getting paid. Being the one who doesn't puts you in a different category entirely.

## Use Templates, But Customize Them

Have templates for initial contact, proposals, status updates, and project wrap-up. Customize them for each client. Don't send the same message twice, but don't write every message from scratch either.

Templates ensure you don't forget important details. They also save you mental energy for the actual work. The customizing takes five minutes. Writing from scratch takes twenty.

## Clarity Beats Charm

The clients you want to work with respond to clarity, not charm. Be specific about what you'll deliver, when you'll deliver it, and how much it costs.

That's it. No personality contest. No trying to be the most likable person in their inbox. Just clear, professional communication that makes their life easier.

The amateurs are still trying to be clever. The pros are trying to be clear.

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>freelancing</category>
            <category>clients</category>
            <category>communication</category>
            <category>consulting</category>
            <category>business</category>
        </item>
        <item>
            <title><![CDATA[An AI Resume System That Sounds Human]]></title>
            <link>https://ricsmo.com/blog/why-ai-resumes-get-ghosted</link>
            <guid>https://ricsmo.com/blog/why-ai-resumes-get-ghosted</guid>
            <pubDate>Fri, 10 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[How to build an AI resume and cover letter system that sounds human — trained on your own experience, powered by Claude, no paid tools needed.]]></description>
            <content:encoded><![CDATA[
I've been looking for part-time consulting opportunities in instructional design and course creation. Not traditional employment — I'm building my own course creation consulting business. But consulting gigs, contract work, and project-based roles that pay the bills while I grow things.

Like most people in 2026, I use AI to help with application materials. And I quickly ran into the same problem everyone else runs into: the output sounds like it was written by a machine.

The difference is, I spent enough time staring at AI-generated text that I started recognizing the patterns. I catalogued them. Then I built a system that eliminates them. Now I feed a job description to Claude and get back a tailored resume and cover letter that reads like a person wrote it.

Here's the problem, the patterns, and the system.

## The Problem With AI Resumes

Hiring managers are drowning in applications. When they see cover letter number 47 that starts with "In today's rapidly evolving landscape," they stop reading. Not because the content is bad, but because it's obviously templated. It says "I didn't care enough to write this myself."

This problem gets worse when you understand how proposals and applications are actually evaluated. The person reading your application is overwhelmed. They're scanning dozens of submissions that all look the same. Most freelancers and applicants copy-paste the same generic content. The ones who stand out are the ones who sound like they actually read the job posting and understood the problem.

AI resume tools have the opposite problem built in. You paste your resume, paste a job description, click generate, and get something that hits all the keywords but reads like every other AI-generated application in the pile. The tools optimize for keyword matching. They don't optimize for sounding human.

## The AI Tells

After building dozens of applications with AI, I identified the patterns that give it away.

### Em Dash Overload

AI loves em dashes. It uses them to cram multiple ideas into one sentence, creating a breathless reading experience.

AI version:
> "I led a team of 12 instructors — managing schedules, curriculum development, and student outcomes — while also overseeing the college's first fully online degree program — a $2M initiative that increased enrollment by 40%."

Three em dashes in one sentence. Read it out loud. It's exhausting.

After fixing:
> "I led a team of 12 instructors. We managed schedules, curriculum development, and student outcomes. I also oversaw the college's first fully online degree program, a $2M initiative that increased enrollment by 40%."

Same information. Three sentences instead of one. You can breathe between them.

If you count more than two or three em dashes in a cover letter, that's a red flag.
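
That check is easy to automate. A trivial sketch:

```ts
// Count em dashes (U+2014); more than two in one document is a red flag
export function emDashCount(text: string): number {
  return (text.match(/\u2014/g) ?? []).length;
}
```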

### Transition Word Salad

"Moreover," "Furthermore," "Additionally," "Nevertheless." AI stacks them at the start of paragraphs.

AI: "Additionally, I have experience with multiple LMS platforms. Furthermore, I hold a CompTIA Security+ certification."

Human: "I've worked across multiple LMS platforms and hold a CompTIA Security+ certification."

Most human writing doesn't transition at all. We just start the next paragraph.

### Rhetorical Questions

"Looking for an instructional designer who can bridge the gap between technology and pedagogy? With 20 years of experience..."

Nobody talks like this. A human cover letter just starts making its point.

### Hedging

AI is allergic to direct statements. Everything is "I was responsible for" or "I contributed to" or "I played a role in."

- "I was responsible for a team that launched 50 courses" (AI)
- "I led the team that launched 50 courses" (Human)

### The Three-Example Rule

Whenever AI lists examples, it almost always gives exactly three. Not two, not four. Three. Check any AI-generated document and you'll see it.

### Corporate Jargon

"use" becomes "utilize." "help" becomes "facilitate." "improve" becomes "optimize." Real people don't talk like that.

## My System

Once I knew the patterns, I built rules to prevent them. But rules alone aren't enough — AI also needs context about your actual experience. Generic advice like "focus on achievements" doesn't help when the AI has no idea what your achievements are.

Here's how I set it up.

### Step 1: Build a Knowledge Base

I'd been working with an AI assistant (Craft Agent) across dozens of work sessions. Course creation, web development, automation projects, consulting work. All of it captured in session histories.

I extracted everything from those 41 sessions — tools used, projects completed, problems solved, technologies implemented — and compiled it into a single experience file. Think of it as a detailed, factual resume that no human would ever write, but that gives the AI the raw material to work with.

That file lives in my AI assistant's workspace. Whenever I start a new application, it already knows everything I've done.

### Step 2: Write the Rules

I created a set of rules for how resumes and cover letters should be written. Not just "avoid AI patterns," but specific formatting and writing guidelines:

- **ATS format.** No tables, no columns, no fancy layouts. Plain text sections with clear headings. ATS systems parse these reliably.
- **No em dashes.** Replace with periods, commas, or parentheses. Max two per document.
- **Ownership verbs.** "Led," "built," "launched," "managed," "designed." Not "was responsible for" or "contributed to."
- **No transition words at paragraph starts.** Just start the paragraph.
- **No rhetorical questions.** Especially not in cover letter openers.
- **Varied list lengths.** Don't default to three examples every time.
- **Plain English.** "Use," not "utilize." "Help," not "facilitate."
- **Specific numbers.** "$2M initiative," "39,000 professionals trained," "50 courses launched." Not "significant impact" or "substantial growth."
- **Length limits.** Cover letters: 400 words max. Application questions: always check whether the limit is words or characters. Don't guess.
- **No hallucinated experience.** If it's not in the experience file, it doesn't go in the resume. Period.

### Step 3: Provide Writing Samples

Rules tell the AI what not to do. Samples show it what good looks like.

I took cover letters and resume sections that I'd already revised to sound natural and saved them as reference material. When the AI writes a new application, it has examples of the tone, sentence structure, and level of detail I expect.

### Step 4: Feed It a Job Description

Now the workflow is simple. I find a job posting, paste the job description, and say something like:

> "Write a resume and cover letter for this position. Use my experience file for content. Follow the resume-builder rules. Match the tone of the writing samples."

What comes back is a tailored application that uses my actual experience, hits the job description's keywords, and reads like a person wrote it. Not perfectly — I still review and tweak. But the first draft is usually 90% there.

## Why This Beats Paid AI Resume Tools

There are dozens of AI resume builders out there. Resume Worded, Jobscan, Teal, Kickresume. They all have the same fundamental problem: they're optimizing for the wrong thing.

They optimize for ATS keyword matching. Which matters. But they're all using the same underlying models with the same default writing style, so they all produce output that sounds the same. And hiring managers can tell.

There's also a pricing problem most people don't think about. A lot of the advice you see online tells you to charge premium rates from day one. "Your skills are worth $5,000, never accept less." That sounds great in a course, but it doesn't work when you're starting out and no one knows you yet. The realistic approach is to be flexible with pricing early on, invest in building up reviews and a track record, then raise your rates as demand increases. The same principle applies to your resume: the goal isn't to sound expensive on paper. It's to get hired so you can prove your value. Then you're in a position to negotiate from strength.

My system has three advantages:

**It's trained on my experience.** Not generic templates. When it says I "launched a $2M online degree program," that's because I actually did that. The specifics come from my knowledge base, not from a prompt that says "add impressive-sounding achievements."

**It costs nothing extra.** I use my existing Claude subscription. The paid AI resume tools charge $20-50/month for access to the same models with worse prompts. Why pay for a wrapper around ChatGPT when you can build a better system yourself?

**The output actually passes as human.** Because I've explicitly trained it to avoid the patterns that AI resume tools leave in by default. The de-ai-ifying is baked into the rules, not bolted on as an afterthought.

## The Workflow in Practice

Here's what applying for a job looks like for me now:

1. Find a job posting that fits
2. Copy the job description
3. Paste it into my AI assistant with a one-line instruction
4. Get back a tailored resume and cover letter
5. Review for accuracy (did it hallucinate anything?), tweak tone if needed
6. Export and submit

Total time per application: under 5 minutes, including review. The resume tailors my experience to the specific role. The cover letter addresses the job's requirements directly. Neither one sounds like a robot wrote it. And the output isn't plain text — the system generates properly formatted DOCX and PDF files ready to submit.

## One More Thing: First Impressions Matter

A good system produces good documents. But there's a meta-lesson here that's worth mentioning.

The best proposal in the world gets ignored if the first line doesn't grab attention. When someone is scanning a stack of applications, they see maybe the first two or three sentences before deciding whether to keep reading. Most people waste that space with generic openers: "I am writing to express my interest in..." or "I came across your job posting and..."

The applications that get replies are the ones that start by showing they actually read the posting. Mention something specific from the job description. Ask a clarifying question. Reference the company or project by name. Anything that signals "I read this" instead of "I copy-pasted this."

My system handles this automatically because it has the job description as input. But the principle applies even without AI: the first line of your application should prove you read the job posting. Everything else is secondary.

If you're applying for roles and using AI to help, stop pasting your resume into generic tools and hoping for the best. Build a knowledge base, write some rules, save some samples of writing you're proud of, and give the AI something real to work with. The technology is good enough to produce human-sounding output. You just have to tell it what "human" means.

*If you're building a course or need help with instructional design, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>resumes</category>
            <category>cover-letters</category>
            <category>AI</category>
            <category>job-search</category>
            <category>freelancing</category>
            <category>consulting</category>
        </item>
        <item>
            <title><![CDATA[Claude vs Copilot for AI Coding]]></title>
            <link>https://ricsmo.com/blog/claude-vs-copilot-coding</link>
            <guid>https://ricsmo.com/blog/claude-vs-copilot-coding</guid>
            <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A practical comparison of Claude and GitHub Copilot for AI-assisted development — when to use each, and why context window size changes everything.]]></description>
            <content:encoded><![CDATA[
GitHub Copilot is still the most widely used AI coding assistant among developers. A January 2026 JetBrains survey of over 10,000 developers found 29% use Copilot at work. But Claude Code is growing fast, now tied with Cursor at 18% adoption, and Claude has the highest satisfaction rating of any AI coding tool at 91%.

I default to Claude. Copilot is my secondary tool.

## The Short Version

Claude handles larger context better. When you're working on a project with dozens of files and need the AI to understand the full picture, Claude can hold more context in a single conversation. Copilot is better at quick inline suggestions within a single file.

That's the core difference. Everything else flows from it.

## My Workflow

I use [Craft Agents](https://agents.craft.do/) (powered by [Claude](https://claude.ai)) for architectural decisions, multi-file changes, debugging across files, and writing new features. I use [Copilot](https://github.com/features/copilot) for boilerplate completion, small repetitive patterns, and quick inline fills while typing.

In a typical session, I'll start in Craft Agents with a description of what I want to build. Craft Agents reads the relevant files, understands the project structure, proposes an approach, and implements the changes across multiple files. Then I switch to my editor with Copilot running for the finishing work. Typing out a component, filling in prop types, writing test cases. Tab, tab, tab.

They're not competitors. They serve different purposes. Using only Copilot for everything is like using a screwdriver for every home repair. Sometimes you need a screwdriver. Sometimes you need a drill.

## Why Claude for Big Work

Copilot's strength is speed. You're typing a function, it suggests the next three lines, you hit tab, done. That's useful for boilerplate. It's not useful for "redesign the component architecture to support pagination."

For that kind of work, you need the AI to understand multiple files simultaneously. The component file, the page file, the routing config, the type definitions, the data fetching logic. Copilot sees one file at a time. Claude can hold the entire context in a single conversation.

Craft Agents runs Claude as a full agent with tool access. It can read files, search the codebase, run commands, and make edits across multiple files in a single turn. Copilot can't do that. Copilot is a completion engine. Claude through Craft Agents is an autonomous assistant.

A concrete example: I added pagination to my blog. The task touched the blog index page, a new paginated page component, a pagination UI component, the blog utility library, and the sitemap generator. Craft Agents understood all of those files, proposed the architecture, implemented each change, and ran the build to verify. Copilot couldn't have done that. It can only suggest what comes next in the file you're currently editing.

## Why Copilot for Small Work

Where Copilot excels is the stuff you do hundreds of times a day. Writing a function signature and having it fill in the body. Typing a CSS class and getting the full rule suggested. Creating a new component and having the import statements auto-complete.
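
A made-up example of what I mean — you type the first line, the completion engine drafts the rest:

```ts
// You type the signature; the tool suggests a body like this.
// slugify is an illustrative example, not from any specific project.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, "") // drop punctuation
    .replace(/\s+/g, "-")         // spaces to hyphens
    .replace(/-+/g, "-");         // collapse repeated hyphens
}
```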

That sounds small. It's not. Over the course of a day, those micro-completions save a lot of keystrokes. Copilot is fast and it's always there. No prompting, no context switching. Just type and tab.

I also use Copilot Chat in VS Code for quick questions about a specific file. "How does this function work?" "What are the edge cases here?" Claude could answer those too, but Copilot is already in my editor. No tab switching, no new window.

## Other Tools in the Stack

Claude and Copilot aren't my only AI tools. I also use ChatGPT for quick questions and general knowledge, Gemini for research, Z.ai GLM models and OpenRouter for accessing different models, and local Llama models for privacy-sensitive work where I don't want to send data to an API.

Different tools for different problems. I don't have loyalty to any particular model. I have loyalty to whatever gets the job done fastest.

## Context Window Is the Real Differentiator

The reason Claude wins for project-level work comes down to context window size. Copilot operates on the file you're currently editing, plus whatever context it can pull from open tabs. Claude can ingest an entire project's worth of code in a single conversation.

When the task is "fix this function," context window doesn't matter. When the task is "build a new feature that touches eight files across three directories," context window is everything.

This is also why I keep transcripts of my work sessions in a knowledge base. When I start a new session, I can reference previous work without re-explaining the project history. The context carries forward.

## The Data

In case you're wondering whether I'm the outlier here: Claude adoption has grown 6x among developers in the past eight months. It now ties with Cursor as the second most-used AI coding tool. And among developers who use Claude Code, 91% are satisfied with it. That's the highest rating of any AI coding tool on the market.

Copilot still leads in total users, and it's a solid tool. But the trend is moving toward reasoning-focused tools over pure completion engines. I'm just ahead of the curve.

## This Will Change

Both tools are improving fast. Copilot is getting better at multi-file awareness. Claude is getting faster at inline completion. In six months the comparison might look different.

Right now, Claude handles reasoning and multi-file work better. Copilot handles speed and inline completion better. My workflow reflects that split.

The people who struggle with AI coding tools are the ones who pick one and try to use it for everything. That's like picking one programming language and insisting it's the best for every use case. Use the right tool for the right task.

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>ai</category>
            <category>claude</category>
            <category>copilot</category>
            <category>development</category>
            <category>tools</category>
        </item>
        <item>
            <title><![CDATA[Three-Phase Pricing for Freelancers]]></title>
            <link>https://ricsmo.com/blog/three-phase-freelance-pricing</link>
            <guid>https://ricsmo.com/blog/three-phase-freelance-pricing</guid>
            <pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A realistic freelance pricing strategy in three phases — from track record building to premium rates, based on what top earners actually do.]]></description>
            <content:encoded><![CDATA[
Before launching my consulting business, I spent weeks researching successful freelancers. I wanted to know how they actually priced their work, not how they sold pricing advice to beginners.

The most common tip I found was "charge what you're worth from day one." That advice exists to sell courses, not to help you get work. When nobody knows who you are, quoting $200 an hour gets you zero clients and zero reviews.

I kept digging and found a clear pattern. The top earners didn't start premium. They followed a three-phase progression.

## Phase 1: Build a Track Record

Accept lower rates to get your first 10 reviews. This isn't undervaluing yourself. It's investing in proof.

Every completed project and positive review is an asset that pays dividends later. You need evidence before you can command premium rates. Buyers on freelance platforms are risk-averse. They don't want to be your first client. They want to see that other people trusted you and got good results.

The goal in Phase 1 isn't maximizing revenue. It's maximizing completed projects and positive reviews. If that means pricing low enough that buyers take a chance on an unknown seller, that's the play.

The hardest part is psychological. You know what your time is worth in your head, and charging less feels wrong. But reviews are the currency that buys you higher rates later. Every five-star review is worth more than the difference between your Phase 1 rate and your Phase 3 rate.

## Phase 2: Raise Rates Selectively

Once you have reviews and a completion history, you have leverage. Start being selective about which projects you take.

Raise prices on new clients while keeping existing ones at their original rate. You don't have to announce a price increase to everyone at once. The new rate applies to the next person who messages you, not the client you've been working with for three months.

This phase is where you start feeling the difference. Clients come to you with projects and you get to decide whether the rate is worth your time. Some you take. Some you pass on. That selectivity signals to the platform's algorithm that you're in demand.

The top earners I studied didn't raise rates once and stop. They raised them incrementally, sometimes every few weeks, as their review count grew. Small increases are less noticeable to buyers but compound quickly for the seller. Raise a $40 rate by 10% a month and you're quoting over $125 within a year.

## Phase 3: Set Fixed Rates

When clients seek you out instead of you finding them, you've arrived. Now you set rates and clients accept or pass. No more negotiating from a position of need.

This phase happens faster than people think. Not years. Months. Some of the top earners I researched went from their Phase 1 rate to their Phase 3 rate in four to six months.

The difference between Phase 2 and Phase 3 is inbound vs. outbound. In Phase 2, you're still sending proposals. In Phase 3, clients find you through search results, repeat business, and referrals. You still send proposals for new opportunities, but you're not dependent on them.

## The Key Metric

The key metric isn't your hourly rate. It's your review count and completion rate.

Platform algorithms weight reviews and completions more heavily than price. A seller with 50 reviews and a 100% completion rate at $40 an hour outranks a seller with 3 reviews at $150 an hour. Every time.

This is why the "charge premium from day one" advice fails in practice. A $150 seller with no track record is invisible in search results. A $40 seller with 30 reviews is on the first page. Guess who gets more clients.
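
If the intuition helps in code form, here's a toy scoring function — invented weights, purely illustrative, not any platform's actual algorithm — that reproduces exactly that ranking:

```ts
// Toy ranking model. The weights here are made up for illustration;
// no platform publishes its real formula.
type Seller = { reviews: number; completionRate: number; hourlyRate: number };

function toyRankScore(s: Seller): number {
  // Track record dominates, with diminishing returns on review count.
  const trackRecord = Math.log1p(s.reviews) * s.completionRate * 10;
  // Price matters, but only weakly.
  const pricePenalty = s.hourlyRate / 100;
  return trackRecord - pricePenalty;
}

const veteran: Seller = { reviews: 50, completionRate: 1.0, hourlyRate: 40 };
const newcomer: Seller = { reviews: 3, completionRate: 1.0, hourlyRate: 150 };

console.log(toyRankScore(veteran) > toyRankScore(newcomer)); // true — the $40 seller ranks higher
```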

## Hourly vs. Project Pricing

Both models have their place. The mistake is using the wrong model for the wrong type of work.

Hourly pricing works for consulting and open-ended work. When the scope isn't fully defined, when the client needs ongoing support, or when the work involves collaboration and iteration, hourly makes sense. The client pays for your time and expertise, and you don't get punished for scope creep.

Project pricing works for defined deliverables. When you know exactly what the client needs and how long it takes, project pricing is cleaner for everyone. A course build with 10 modules is a project. An ongoing consulting engagement is hourly.

The trap is pricing a project too low because you estimated wrong. That's why you only offer project pricing on work you've done before. If you've never built a course for a financial services client, don't quote a fixed price. Quote hourly until you understand the scope.

## My Plan

For my own launch on Upwork, I'm planning to start at $95 to $115 an hour. Not because that's my "worth" but because it's the entry point for my niche that balances getting work with leaving room to grow.

That range is below what I'd consider my "real" rate for consulting work. But on a platform where review count matters more than rate, the priority is getting those first 10 completed projects. Once I have the track record, the rate goes up.

## Never Discount Without a Reason

Never apologize for your rates. Never discount without taking something out of the deal in return, like a reduced scope or a shorter timeline. Discounting trains clients to negotiate every time.

If a client asks for a lower rate, respond with a question. "What would you like to adjust to make that work? I can reduce the scope, shorten the timeline, or remove a deliverable." Make the trade-off explicit. Either they pay full price for full scope, or they get less for less. You don't give away work for free.

## Rate Is Not Self-Worth

The people who struggle most with pricing are the ones who attach their self-worth to their rate. They hear "your rate is too low" and feel insulted. They hear "I can't afford that" and feel rejected.

Your rate is a business decision. It's based on market demand, your track record, and your growth strategy. It has nothing to do with your value as a person.

Charge what the market will bear given your current position. Build reviews, raise rates, repeat. That's the system. It works.

*If you're building a consulting business and want help standing out, that's what I do. [Get in touch](https://course.coach).*
]]></content:encoded>
            <category>freelancing</category>
            <category>pricing</category>
            <category>upwork</category>
            <category>consulting</category>
            <category>business</category>
        </item>
    </channel>
</rss>