The information world is exploding with new technology and devices of all shapes and sizes, bringing unprecedented growth in demand for new information products and the content to drive them.

Keep Up or Fade

Today you must publish seamlessly across output devices of every type and size, and tailor your information to each user group’s specific needs and desires, or you risk becoming old hat. But as consumption grows, the burden on content creators and managers grows with it, forcing publishers to scramble for answers.

If that describes you, one answer may come partly from something called “single source publishing” or “SSP.”

Enter Single Source Publishing

Single source publishing, vendor hype notwithstanding, is a broad approach to content aimed at creating information products targeted to specific audiences automatically, without manual intervention or rework. Although the techniques of SSP are decades old, they have never been viewed with the urgency they evoke today.

SSP is often treated as synonymous with “component content management,” or componentization: breaking your content into small pieces that can be assembled in many different ways to create different outputs, like Scrabble, where one letter per tile lets you spell any word.

Standards often imply componentization as well: DITA is designed to manage and assemble components (or “topics”), and technical standards like S1000D build componentization into their architectures.
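
To make the idea concrete, here is a minimal sketch of componentization in DITA, where a map is simply a list of pointers to component topics. The file names and titles are hypothetical, and real maps would carry a DOCTYPE and more detail:

    <!-- user-guide.ditamap: assembles three components into one output -->
    <map>
      <title>User Guide</title>
      <topicref href="installing.dita"/>
      <topicref href="configuring.dita"/>
      <topicref href="troubleshooting.dita"/>
    </map>

    <!-- quick-start.ditamap: reuses two of the same components -->
    <map>
      <title>Quick Start</title>
      <topicref href="installing.dita"/>
      <topicref href="configuring.dita"/>
    </map>

Each topic is written once; different maps collect the same pieces into different information products.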

So, for better or worse, any discussion of SSP is partly a discussion of componentization.

Components: How Big vs. How Many

Don’t get me wrong: componentization works. It has many adherents and a long history, it’s enshrined in standards, and it’s supported by a growing body of software. But it’s not without its limitations, a big one being “fragmentation,” the two-headed monster of component size and volume.

If you need highly granular output, your components must be highly fragmented, which gives you the flexibility you need but often creates a blizzard of tiny components that are difficult to author and maintain. With larger components, you have fewer pieces to manage but face increasingly redundant content that must be revised everywhere it occurs.

Either way, the process works, but it is often less efficient than it should be. Worse yet, you may not know you’re in trouble until your system is in operation and decisions that are hard to undo have already been made.

Finding the Component Size Balance

The solution, in a perfect world, is to get the output granularity you need without greater fragmentation or redundant content. Two techniques often overlooked in planning and implementation may hold the key.

The first, sometimes called “variant tagging,” involves tagging within individual XML components so that one component can serve multiple contexts. If an element appears, with some differences, in output variants A, B, and C, the original element can be tagged so that each output filters in just the version it needs.
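
In the abstract, the markup looks something like this; the element name and the “variant” attribute are invented for illustration, not drawn from any particular standard:

    <!-- Both versions live side by side in one component -->
    <warning variant="A B">Disconnect power before servicing.</warning>
    <warning variant="C">Disconnect power and drain coolant before servicing.</warning>

A build for variant A or B keeps the first warning and drops the second; a build for variant C does the reverse. The surrounding component is authored and maintained once.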

DITA provides this capability through “conditional processing,” and some commercial tools support its use. Likewise, S1000D provides a similar capability through its “applicability” tagging.
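
In DITA, for example, profiling attributes such as audience, platform, and product carry the variant information, and a DITAVAL file tells the processor what to filter out at build time. A minimal sketch, with hypothetical content and attribute values:

    <!-- In the topic source: one step differs by audience -->
    <step audience="admin">
      <cmd>Edit the server configuration file directly.</cmd>
    </step>
    <step audience="user">
      <cmd>Ask your administrator to update the configuration.</cmd>
    </step>

    <!-- user.ditaval: excludes admin-only content from the user build -->
    <val>
      <prop att="audience" val="admin" action="exclude"/>
    </val>

Running the same source against a different DITAVAL file produces the admin variant, with no duplicated topics and no rework.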