
When Conditional Content Goes Wild – Part 2: Securing Support and Making Deep Changes

By Noz Urbina, Founder and Content Strategist, Urbina Consulting

I’m back for part 2 of this post, having been neck-deep in preparations for Congility 2014. This time I am going to give you a very simple introduction to the issues followed by much more complex tips following on from last time. I have also got a special discount offer for DCL readers (skip to the last heading if you’re in a hurry!).

Conditional Content Overload – for managers and other non-technical communicators

In the previous instalment, I gave a very standard definition of conditional content – the type that most professional communicators can follow easily, will have heard of, or may even have used many times. This time, I’m going to give you a more manager-friendly, illustrative way of walking through the issue, one that doesn’t require hands-on experience to appreciate.

I have found this explanation to be a useful tool for teams who are trying to secure:

    1) The empathy of managers and colleagues regarding what the team has to deal with
    2) The budget and organisational will to do something about it.

It may seem simple, but by building from the ground up, you’ll be sure everyone in the room understands what you’re talking about. Don’t underestimate the value of this kind of empathy. Understanding can “buy” you more time to work, or more respect and appreciation when you say that, yes, it really does take that long to make a few simple changes and “debug” the content to make sure it all comes out OK at the other end. I’ll be providing the source PowerPoint images for the visualisations under a Creative Commons Attribution License, so you’re free to use and adapt them under those terms.

Often writers using conditional content are siloed off from other staff. Your tools and processes can be arcane and cause your managers and colleagues’ brows to wrinkle when you try to start explaining them. This is not a good thing. So, let’s start from scratch.

What is conditional content?

Let’s start by illustrating the basic structural idea of a document. Anyone working in a business environment will recognise a web or print deliverable as a collection of sections[1] wrapped up in something – some sequence or package – that fulfils a need. I visualise it like this:


If we want deliverables that have audience-specific information, we will need to do one of two things:

    1) Put all the audiences’ content together so that every group sees everything (one size fits all – whether they like it or not)
    2) Provide each audience profile with their own personalised copy.

One size fits all deliverables

The first option is cheapest for the organisation and therefore is often what ends up being done. I described this in the previous part of this post. If you’re already using conditional content, then you can probably skip to the section titled “Providing personalised deliverables”.

If you are putting together one size fits all deliverables, you might visualise them like this:

One Size Fits All Deliverable

Most of us have read (and some have written!) documents like this, and sympathise with users who are having a harder time consuming content just so that the content provider can provide it more easily or cheaply.

Providing personalised deliverables

Providing every target audience with their own copy also presents a decision that has to be made. You’ll need to either:

    1) Create copies by hand and synchronise them manually (copy and paste)
    2) Create a master copy and let a machine filter it for us.

Most businesses, especially those in countries where staff costs are high, will try to opt for the second option. This leads us into a simplified explanation of how we reuse content using conditions suited to our managers and colleagues.

Conditional content is:

  • Using special marks[2] on certain pieces of information for a specific audience, e.g. owners vs maintainers of a specific product (use whatever profiles will resonate with the people you’re presenting to)
  • The use of an automatic process to filter it, so users need only read what is relevant for them

The benefit is that it saves effort, time and cost, by making what we write more reusable. This is because there is only one source file for multiple outputs that we deliver.
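To make this concrete for any technically minded people in the room, here is a minimal sketch of what such “marks” look like in DITA-style XML. The element and attribute names are real DITA, but the topic content and the audience values are invented for illustration:

```xml
<task id="oil_change">
  <title>Changing the oil</title>
  <taskbody>
    <steps>
      <!-- No mark: every audience sees this step -->
      <step><cmd>Locate the drain plug.</cmd></step>
      <!-- Marked: only builds filtered for "maintainer" keep this step -->
      <step audience="maintainer">
        <cmd>Log the service in the maintenance record.</cmd>
      </step>
    </steps>
  </taskbody>
</task>
```

At publishing time, the automatic filter includes or excludes each marked element, so owners and maintainers each receive only what is relevant to them.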

The visualisation for this idea is nearly the same as in my last post. You should replace “Whatever you make” with the name of a specific deliverable (website, datasheet, manual, microsite, etc.) that the people you’re addressing can relate to.

Multiple Outputs, Few Marks

Explaining the pain

The important part is really making clear why this approach is problematic. The problem is the same as the benefit: you only use one file for multiple outputs. Eventually, that file becomes unwieldy and bloated with these special marks, making it very hard to review, edit, or update. I visualise it like this:

Multiple Outputs, Many Marks
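If you want to show what “many marks” looks like in the source, a hypothetical worst-case fragment (DITA-style attributes, all values invented) makes the point quickly:

```xml
<!-- One step, four conditions deep: typical of a file that has grown
     organically over several releases (all values invented) -->
<step audience="maintainer installer" product="prodA prodB prodD"
      platform="linux windows" rev="2.1">
  <cmd>Run the diagnostic
    <ph product="prodA">from the front panel</ph>
    <ph product="prodB prodD" audience="maintainer">via the service port</ph>.
  </cmd>
</step>
```

Multiply that by a few hundred steps and the review and debugging burden described above follows naturally.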

After they have seen the conceptual illustration, you can wow the uninitiated with some screenshots. Try to show them:

    1) What the authoring interface looks like, with all its tags and ugliness on full display (choose a nice, meaty example from your real-life work)
    2) What that looks like in output (usually far simpler and recognisable)
    3) The list of current options you have to choose from when selecting conditional profiles (showing them the dialogues from your authoring tool, with dozens of choices, is often suitably scary). Also share examples or horror stories of what has gone wrong, or nearly gone wrong, in the past – something like: “We nearly sent out content from our internal service guide to external customers, which would have leaked valuable IP.”

Then tell them how much time you spend dealing with these conditions rather than actually writing.

I have – and you can quote me on this – many clients whose authors spend more than half their time working with conditional tags, rather than the words inside them. Those with 50 conditions or more lose even more than half of their time. That means the organisation is funding a content creator who is only able to contribute a minority of their time to value-added work of actual content creation.

Finally, if it’s applicable to you, tell them how many condition options you’re planning to add in the near future.

What you’re trying to get out of all this is a non-ambiguous statement:

“To get to output X, I have to deal with source content Y. That takes me far too long, will get worse, and the risk of errors increases as we pile on the audiences and outputs”.

Two more advanced tips for getting around conditional content

Now we leave behind the world of “for managers” and jump to the opposite extreme with some more in-depth tips on avoiding conditional content. These tips are not alternatives to winning budget, because they will take time and effort to implement, and may require investment in some outside assistance. They should provide you with some self-learning starters and food for thought and you can always get in touch if you need more help.

Model for progressive disclosure/reduction

Luke Hay wrote a concise article on progressive disclosure and reduction and nailed the title: “How to give users less, but give them more”. Although it was born in user interface design, progressive disclosure works brilliantly on content to give users a superior experience with a smaller amount of content (presented at any one time).

If you design your content structures so that certain parts of your content model are intended for certain scenarios, outputs, or possibly audiences, you can use structure, instead of conditions, to control reuse. (If you’re not familiar with the term, a content model is a structural template for a specific type of content, like a task or a reference.)

This article isn’t DITA-specific, but there are some useful ideas in the DITA standard which are worth learning about. In DITA, there are parts of the “Task” content model called the “Short Description” and “Step Info”. If used judiciously with the right authoring guidelines, these might be all you need to mark up content for experts vs. novices. Short Descriptions and Step Info might be hidden for advanced users, providing the “progressive reduction” Hay describes.
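As a sketch, a DITA task using those two elements might look like this. The content is invented, but <shortdesc> and <info> (the element behind “Step Info”) are standard DITA:

```xml
<task id="install_widget">
  <title>Installing the widget</title>
  <shortdesc>A novice-friendly overview of the whole procedure.</shortdesc>
  <taskbody>
    <steps>
      <step>
        <cmd>Run the installer.</cmd>
        <!-- Extra hand-holding detail: a publishing rule could suppress
             all <info> elements in the "expert" output, giving the
             progressive reduction Hay describes -->
        <info>Double-click setup.exe and accept the default options.</info>
      </step>
    </steps>
  </taskbody>
</task>
```

Note that there is not a single conditional attribute in the source; the expert/novice distinction lives entirely in the structure.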

Similarly, if you have special elements of your content model for screenshots, shortcut keys, sample code, or other specific, meaningful structures, you may be able to define rules that control which audiences they go to, which deliverables, or which interface types (e.g. mobile vs. embedded vs. desktop). For example, you might specify that sample code and screenshots are never shown in the online help, only in the PDF output. On mobile displays, maybe you show only short descriptions, and users have to click through for full topics.

Modelling for progressive disclosure allows you to use the built-in structure of your content to define audience and output mappings, rather than needing to extensively mark up an otherwise undifferentiated flow of text using traditional conditions.
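How these structure-based rules are enforced depends on your toolchain; in a DITA Open Toolkit pipeline, for example, they usually live in a stylesheet override rather than in the content. A hypothetical XSLT sketch for the “no sample code in online help” rule above:

```xml
<!-- Hypothetical stylesheet override for the online-help transform:
     suppress sample code entirely, so authors never condition it -->
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Matching <codeblock> and emitting nothing removes sample code
       from this output; the PDF stylesheets simply omit this rule -->
  <xsl:template match="*[contains(@class, ' pr-d/codeblock ')]"/>
</xsl:stylesheet>
```

The rule is written once, in one place, instead of as a conditional attribute on every code sample in the content.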

Model using multiple deliverable definitions

A deliverable definition is the file that states what modules of content are used in your deliverable. It may have sub-definitions for chapters, or other smaller structures. For DITA users, these are maps, submaps or chapter maps.

If your system allows it, then breaking up content into smaller modules will open up the option of a hierarchical system of reuse, allowing more flexibility without any conditions. If you have never used an environment that has these kind of capabilities, this can seem daunting, but in terms of scalability, it actually works much better.

For example, let’s assume you have a chapter called “API References”, and are using conditions to include or exclude them according to products. Conceptually, the chapter might look like this:

API References

Instead of using inline conditions, you could move each reference into its own file; the file that defines the chapter would then reference only the appropriate reference files.

API References, Definition Files

Here we can see that Products A, B and C now have their own definition files and point to the relevant APIs. Each of the API reference chapters would in turn be referenced by the master definition file for the deliverable itself (This might be a Book file, Bookmap, etc. depending on your terminology and tool).

Note that in this system the reference itself does not “know” what products refer to it – that is, there is no mark-up in the references themselves that relates to products. This is very helpful because we don’t need to update the individual files just because a new product starts using them.
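For DITA users, the Product A definition file from the illustration might be a chapter map along these lines (file names invented):

```xml
<!-- product_a_apis.ditamap: chapter definition for Product A.
     Only the map knows which references belong to the product -->
<map>
  <title>API References – Product A</title>
  <topicref href="api_open_connection.dita"/>
  <topicref href="api_close_connection.dita"/>
  <topicref href="api_query_status.dita"/>
</map>
```

Adding a new product means adding a new map that points at the same topic files; the topics themselves stay untouched, exactly as described above.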

If there were a lot of overlap between products – say, for subtle variants of products within a family that are part of a larger group – you might have a level of definition file in between, pulling together sub-collections into main collections. This can get quite complex, so the skills of an information architect can be useful.

But isn’t this all just complexity shuffling?

I am often asked if we’re not just moving complexity around rather than removing it: we have fewer conditions, but more chunks, more content modelling, and more information architecture. The answer is yes, that is exactly what we are doing. But that’s good. Complexity shuffling is vitally important to scalability.

We’re all just users after all

Content creators are users too. They’re users of content creation systems, and suffer the same interface overload as their own end users. I’ll refer you back to the idea of progressive disclosure. What is important is not how complex the system is, but how much complexity is thrown at the user at one time, on one screen.

The classic example is the iPhone. iPhones are praised for their usability, but are they simple? Not at all. Any smartphone is brimming with complexity and features, but it represents thousands or tens of thousands of hours spent complexity shuffling. Usability experts make sure that at any given moment, the user is not overwhelmed with all the things that their phone could do. This is the same logic as untangling conditional overload. We’re shifting complexity elsewhere – sometimes creating it where there was none before – but that’s necessary to:

    1) Reduce the overload in any given authoring task and in a day’s normal work – because most of the complexity is now in planning stages, not in daily authoring.
    2) Reduce the net complexity experienced by the team, as planning and modelling are done a few times by a small number of staff, meaning the team as a whole deals with less complexity in the system.
    3) Reduce overall complexity such that the entire process in the team works faster and with fewer errors.

Learn more at Congility 2014 – at a 30% discount

I hope you enjoyed this article and its accompanying webinar. If you are left with lots of questions about this and other related topics, then join us at Congility 2014, this 18-20 June, for a day of workshops and two days of sessions from globally recognised thought leaders and practitioners bringing their wisdom and real-world case studies to the table. For DITA fans, we’ll also have Europe’s first session from Michael Priestley addressing Lightweight DITA and IBM’s new take on reuse, Kristen James Eberlein’s first overview session on DITA 1.3, and a workshop from Dr. Tony Self, author of the DITA Style Guide – all at unprecedented value.

Followers of DCL are entitled to 30% off entry to this year’s conference if they use the code DCLCA1430 when registering online.

Hope to see you there!


[1] I am going to say this now before someone beats me to it; I’ve been sitting on it for years but never used it in a presentation. As the internet and component content took over, we talked long and hard about the death of books. I definitely supported the demise of book-oriented thinking. Today’s content comes in smaller, more agile chunks that can be organised in lots of ways for different needs. The book – and any deliverable, really – is simply an artifact that wraps up an arrangement of knowledge at one time to meet one need. So, really, we shouldn’t talk about books, apps, websites, Google Glass augmented-reality content, and the like anymore at all. We should just talk about bodies of knowledge that are well organised. For clarity around this new post-book, knowledge-oriented paradigm, I formally coin the official term: Bodies of Organised Knowledge, or for short, just the acronym: BOOKs.

[2] Note that I avoid the term “mark-up” unless I’m sure everyone in the room understands and is familiar with it, at least knowing it from domains like reviewers’ mark-up on drafts or HTML markup.


Noz Urbina, Urbina Consulting

Noz Urbina is a Content Strategist and is the Founder of Urbina Consulting.