Rendering Content

NarratorAI outputs text and optionally saves it to a file of your choice, so you can render it however you like from there. If you're using React, you can use the @narrator-ai/react package to get some nice features out of the box:

npm install @narrator-ai/react

I use @narrator-ai/react in various places on my personal blog, so I created a little NarrationWrapper component that I can easily reuse and reconfigure depending on the context. Here's my actual NarrationWrapper:

import { Narration } from "@narrator-ai/react";
import NarrationMarkdown from "./NarrationMarkdown";

const sparkleText = "This summary was generated by AI using my narrator-ai npm package.<br /> Click to learn more.";

export function NarrationWrapper({
  id,
  title,
  className,
  titleClassName,
}: {
  id: string;
  title: string;
  className?: string;
  titleClassName?: string;
}) {
  return (
    <Narration
      title={title}
      id={id}
      className={className}
      titleClassName={titleClassName}
      sparkleLink="/about/ai"
      sparkleText={sparkleText}
      showActions={process.env.NODE_ENV === "development"}
    >
      <NarrationMarkdown id={id} />
    </Narration>
  );
}

This simple React component accomplishes a few things:

  • Lets me pass in a title ("7 posts tagged ai" in this case)
  • Shows buttons to regenerate the content or mark a generation as good/bad, but only when I'm developing
  • Configures an "AI Sparkle" with some text to let the reader know this section was AI-generated
  • Delegates the actual rendering of the content to NarrationMarkdown
  • Lets me pass in optional className/titleClassName props to control styling
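For illustration, dropping the wrapper into a page might look like this (the id, title, and class names here are hypothetical, not part of the package):

```typescript
import { NarrationWrapper } from "./NarrationWrapper";

// Hypothetical usage on a tag listing page: the id identifies which saved
// narration to load, and the title is rendered above it.
export default function TagPage() {
  return (
    <NarrationWrapper
      id="tag-ai"
      title="7 posts tagged ai"
      className="narration"
      titleClassName="narration-title"
    />
  );
}
```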

And here's what it looks like when rendered:

NarrationWrapper screenshot

My actual NarrationMarkdown component looks like this:

"use server";

import { narrator } from "@/lib/blog/TaskFactory";
import { MDXRemote } from "next-mdx-remote/rsc";

async function NarrationMarkdown({ id }) {
  const content = narrator.getNarration(id);

  if (!content) {
    return null;
  } else {
    return <MDXRemote source={content} />;
  }
}

export default NarrationMarkdown;

That just grabs the narrator instance we created earlier and uses its getNarration function to fetch the content itself, rendering it with the excellent next-mdx-remote library. In this case we're able to render everything using React Server Components, but you could do it the old useEffect way if you wanted.
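If you're not using React Server Components, a client-side version might look something like the sketch below. This assumes a hypothetical /api/narration/[id] route that returns the rendered narration; that endpoint and this component are illustrations, not part of the package:

```typescript
"use client";

import { useEffect, useState } from "react";

// Sketch of the "old useEffect way": fetch the narration over HTTP from a
// hypothetical /api/narration/:id endpoint instead of reading it in a
// server component.
function ClientNarrationMarkdown({ id }: { id: string }) {
  const [content, setContent] = useState<string | null>(null);

  useEffect(() => {
    let cancelled = false;
    fetch(`/api/narration/${id}`)
      .then((res) => (res.ok ? res.text() : null))
      .then((text) => {
        // Ignore the response if the component unmounted or the id changed
        if (!cancelled) setContent(text);
      });
    return () => {
      cancelled = true;
    };
  }, [id]);

  if (!content) return null;
  return <div dangerouslySetInnerHTML={{ __html: content }} />;
}

export default ClientNarrationMarkdown;
```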

Narration Provider

To take advantage of the content regeneration and training buttons present in the Narration UI, we need to wrap our UI in a Narrator context, which provides the functionality to regenerate/train when we click the buttons. The easiest way to do that is via createNarrator:

//this file is called ./providers/Narrator.tsx, but call it what you like
import { createNarrator } from "@narrator-ai/react";
import { regenerateNarration, saveExample } from "../actions/narration";

//this just creates a React context provider that helps Narrator perform regeneration & training
const NarratorProvider = createNarrator({
  actions: {
    saveExample,
    regenerateNarration,
  },
});

export default NarratorProvider;

Now we just place that Narrator provider into our app layout, such that all of our usages of Narration or NarrationWrapper are children of it:

import NarratorProvider from "./providers/Narrator";

export default function Layout({ children }) {
  return <NarratorProvider>{children}</NarratorProvider>;
}

Server Functions

The provider we created using createNarrator takes a couple of optional actions: regenerateNarration and saveExample. Here's an example of how you might choose to implement them. The regenerateNarration function below uses the Vercel AI SDK along with MDXRemote to stream rendered markdown content back to the browser. You could also just have it return a string, but streaming is where it's at:

'use server';

import { TaskFactory, narrator } from '@/lib/blog/TaskFactory';
import { createStreamableUI } from 'ai/rsc';
import { MDXRemote } from 'next-mdx-remote/rsc';
import { Spinner } from '@narrator-ai/react';

//this is fancy... it returns a streaming MDXRemote component so that you can see the new content
//stream in instead of waiting. You could also just return a string - Narrator supports that too
export async function regenerateNarration(docId: string) {
  const editor = await TaskFactory.create();
  const ui = createStreamableUI(<Spinner />);

  (async () => {
    const textStream = await narrator.generate(editor.jobForId(docId), { stream: true, save: true });
    let currentContent = '';

    for await (const delta of textStream) {
      currentContent += delta;
      ui.update(<MDXRemote source={currentContent} />);
    }

    ui.done(<MDXRemote source={currentContent} />);
  })();

  return ui.value;
}

//this gets called if you click the thumbs up/down buttons in the UI
export async function saveExample(example) {
  return await narrator.saveExample(example);
}

Now we can click the regenerate button to our heart's content in the UI, and see new content generations streaming in:

Narration regeneration via the UI