The following are questions that were submitted during the webinar: Twelve DITA Implementations: The Lessons Learned. The responses were provided by Data Conversion Laboratory and Mekon.
This presentation discussed the results of interviews with the implementers of 12 DITA installations. Find out what they did right, what went wrong, what the business drivers were, and how long it really took. Learn from your peers across a broad range of industries what really happened, and see how your expectations and results measure up.
In case you missed it, you can listen now.
Q1. How much education were the stakeholders given prior to beginning any effort to move to DITA?
That question was not part of our initial survey, but for the Mekon case study, where we have detailed information, stakeholders were educated with a frequency and depth according to their 'closeness' to the project. The pilot team received conceptual introductions and even complete DITA training courses up to 24 months before the actual implementation began (which is possibly what you mean by "effort to move"?). There were also major training or education sessions at least every six months, with less formal sessions in between. Authors not directly involved in the pilot were given conceptual and process education sessions (usually webinars and presentations of 30-90 minutes, every three months or so, from 24 months out).
The less directly involved stakeholders, like service and support staff, were given briefs on how the processes might differ and what feedback was requested from them, also from about 18 months out, as part of the requirements development process. Again, their education was in the form of web presentations and a few onsite workshops to review current issues and future requirements.
Q2. Has the page break issue improved with PDF and printed outputs? The ones I have seen tend to have some horrendous page breaks!
Pagination and image placement are inherent issues in automation, since computers do not handle aesthetic judgments, which are largely subjective, very well. You may set up a document with page breaks exactly as you want them; then someone changes a shared topic somewhere in the middle of a chapter, and the pagination is thrown off again. In a collaborative DITA environment, page breaks are often the kids who suffer when we divorce content from format.
There are a few parts to resolving this issue. First, for many kinds of documents, the speed and economic benefits of a "lights out" publishing process outweigh these aesthetic imperfections; this is what we call the "good enough" solution. DITA can save organizations hundreds, and often thousands, of person-hours a year, while also speeding up production and improving deliverables for end users; but yes, the page breaks aren't as nice.
Second, depending on how your content and team are set up, and what publishing tool you use, maybe you can cheat. We have implementations where, instead of a fully automated 'black box' XSL-FO system that takes in DITA and spits out a PDF, we use FrameMaker. This lets you 'cheat' by capturing the FM files on the way out and adjusting them manually. You can still automate web and alternative formats completely while fudging the print to your liking. We don't fully recommend this, as it is not best practice, but it is an option for when a page break is make-or-break.
Several companies in our surveys reported that for those materials in which the aesthetics and layout precision are more important, they use a post-process to compose those portions.
Q3. What are the most commonly adopted CMS?
There are comparatively few CCMSs (Component Content Management Systems) that can handle DITA properly. In our survey, the CCMSs in use included (in alphabetical order):
Ixiasoft DITA CMS
Really Strategies DocZone
SDL Trisoft (and Content@)
Q4. What are the best Web based WYSIWYG editors for DITA?
In our survey, the WYSIWYG XML editors in use included (in alphabetical order):
Syntext Serna XML
Q5. Noz, what tools did you use to identify reusable content?
We did it with a document analysis. There are three principal ways to identify reusable content. The easiest, as a post-process, is to let the CCMS itself find reusable content (if it has that functionality, and then usually only for exact content matches). That process typically misses the merely similar content. For that, there is the mostly manual option of a detailed document analysis that leverages knowledge of the content to 'search and destroy', on the principle that similar concepts should be expressed identically. The more automated approach, which can be used before conversion and before you even have a CMS, is DCL's Harmonizer, which finds all instances of similar content and groups them together for analysis.
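The automated approach can be illustrated, very roughly, with Python's standard difflib. This is not how Harmonizer works internally; it is just a minimal sketch of the underlying idea of scoring paragraph pairs for similarity and flagging candidates for harmonization. The sample paragraphs are invented:

```python
from difflib import SequenceMatcher

def similar_pairs(paragraphs, threshold=0.8):
    """Return (i, j, ratio) for paragraph pairs whose similarity
    ratio meets the threshold - candidates for harmonization."""
    pairs = []
    for i in range(len(paragraphs)):
        for j in range(i + 1, len(paragraphs)):
            ratio = SequenceMatcher(None, paragraphs[i], paragraphs[j]).ratio()
            if ratio >= threshold:
                pairs.append((i, j, round(ratio, 2)))
    return pairs

docs = [
    "Press the power button to turn on the device.",
    "Press the power button to switch on the device.",
    "Insert the battery before first use.",
]
for i, j, ratio in similar_pairs(docs):
    print(f"Paragraphs {i} and {j} are {ratio:.0%} similar")
```

The first two paragraphs are flagged as near-duplicates; an SME would then decide whether the wording should be unified so one topic can serve both places.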
Once the similar content is identified, the SME can determine if it should be harmonized.
Q6. Did any of the case studies have a significant number of non-techwriter authors? (i.e., SMEs directly contributing content in DITA)
The companies in DCL's survey all reported having teams of technical writers, with non-techwriters providing materials at some level.
In the Mekon case, as in many of the projects, there was significant SME-generated content. It was being generated in offices with a smaller number of professional authors, so more content creation was pushed out onto the SMEs. In this particular case, the intention is to add author resources to start rewriting content to very strict content standards, with SMEs as reviewers. To help bridge the gap between SMEs and authors, the SMEs were trained in Simplified Technical English so they would write in a more minimalist and standardized way; then, when the content reached the authors, it was less work to move it into DITA. In other projects, we've put MS Word-based XML tools out to the SME group so they could author directly into DITA. We have also pushed out templates designed to round-trip content from Word to XML to streamline the process. This can be suitable when installing XML authoring tools, or providing guidance and training for SMEs, is not practical.
Q7. Which tools would you recommend we use to identify reusable content?
See answer to Q5.
Q8. What were the key attributes of the CMS that played into the selection?
In general, we encourage you to look at:
Typical CCM baseline functions:
Check in/out, versioning and roll-back, state-based workflow that can trigger automation scripts, etc.
Scalability (how many concurrent users you anticipate)
Pricing model (some favor smaller or larger roll-out heavily)
Quality of implementation of conref and DITA map editing
Handling of DITA prolog metadata (and the cost of using it), and/or an easily customizable metadata model
Handling of DITA conditional attributes and reporting on them (seeing the conditions used in a map, for example)
Support for dynamic content reuse (always update with the latest content) and static reuse (don't update my content unless I explicitly ask for the latest versions of everything)
Reputation among industry experts and/or reference customers with whom you have been able to make contact
It is also nice to have:
Pre-checking of broken links without having to build the output
The authoring tool of your choice integrated
Sensible reviewing features (most don't have these!)
SME contributor tools/licenses with appropriate lower price-points
References in your industry (this is one our clients like, but as long as the functions are there, it's not always as relevant as one might think).
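To show what the conref handling in the checklist above refers to, here is a minimal sketch of DITA content reuse; the filenames and ids are hypothetical. A shared element is written once and pulled in by reference wherever it is needed:

```xml
<!-- warnings.dita: the topic that owns the shared paragraph -->
<topic id="warnings">
  <title>Shared warnings</title>
  <body>
    <p id="hot_surface">Caution: the surface may be hot during operation.</p>
  </body>
</topic>

<!-- any consuming topic pulls the paragraph in by reference -->
<p conref="warnings.dita#warnings/hot_surface"/>
```

Whether that reference resolves to the latest version of the shared paragraph (dynamic reuse) or stays pinned to a specific version until you ask for an update (static reuse) is exactly the behavior the checklist asks a CCMS to support and report on.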
Q9. Where CMS selection changed in mid-project, was this related to DITA support (or lack thereof) within the original selection, and if not, on what was the decision predicated?
Regarding the two cases in the DCL survey, the change seemed to be needed because the system was 'sold' without sufficient piloting and requirements development. In retrospect, they didn't do a full analysis and jumped straight into tools and implementation. What then happens is that someone realizes they need something that the system either can't do, or that the vendor is not willing to support them in developing.
Q10. Publishing to HTML is much about linking/referencing, which you said should not be used within DITA topics. So my question is: how much of former, targeted authoring quality could be kept during the move to DITA?
It's not that links are not to be used within DITA; rather, their place is not inline and not in the topics themselves. Links in DITA are best made in the map that binds topics together. At rendering time, these are inserted automatically at the bottom of each topic. Alternatively, related links can be placed directly at the bottom of a topic in cases where that link will always be valid for that topic, no matter where it appears. Embedded, or 'inline', links can break when they are created in a topic in one context and the topic is then moved into another context where the link destination is no longer available. Many experts and industry folk, myself included, believe that inline links aren't sensible anyway. There are several references to this raging debate online.
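To make this concrete, here is a minimal sketch of a DITA map with a relationship table; the filenames are hypothetical. The links live in the map, not in the topics:

```xml
<map>
  <title>Product guide</title>
  <topicref href="installing.dita"/>
  <topicref href="configuring.dita"/>
  <!-- related links are declared here, not inline in the topics -->
  <reltable>
    <relrow>
      <relcell><topicref href="installing.dita"/></relcell>
      <relcell><topicref href="configuring.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```

At publish time, a processor such as the DITA Open Toolkit renders this row as related links at the bottom of each topic, in both directions by default, so either topic can be reused in another map without carrying a link that might break there.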
Topics are generally smaller and more compact than traditional narrative prose, so links between topics (at the side, at the bottom, or wherever you like, as long as they're not inline) do not actually make the user wait long to reach related information. Some people still believe related links should not be used in print materials because 'it's not traditional', but I think that argument is about as sound as saying content should not be consumed on tablet PCs because 'they're new and weird'.
If you'll allow me to get on my high horse for a moment, I'll point you to one of my favorite bloggers, Tom Johnson, who has just been pushed over the edge into believing that inline links are distracting for this fundamental reason; he says, "Every time your readers see a hyperlink in your text, they have to pause and ask themselves whether they should click that link and follow that path, or just stay the course ahead." He has collated some references on his blog post so you can continue researching.
At the end of the day, DITA doesn't enforce this rule, and your CMS can arguably protect you from accidentally releasing broken links; so, to answer your question, DITA does nothing to change the quality of your writing. You can choose to ignore its suggested practice and write how you like. However, if it is applied in full, many feel (and more will over time) that writing is actually more targeted and of higher quality when one separates the operation of reading from navigation via links.
Q11. In your view, is the true value of DITA in how it can help you manage your content, or how it can help you publish the content in flexible ways?
This is very much a question of your situation and business. Without understanding more about it, any advice we give here is simply a matter of where 'most companies' get their DITA value; you should focus on where you might get yours. DITA may be equally good at content management and flexible publishing, but content management is harder than automated publishing, as it involves changes to workflows, collaboration paradigms and mindsets. So most organizations realize the publishing benefits first because they're easier. That doesn't mean they're better benefits - I'd argue they're not - but they are the lower-hanging fruit. An organization can invite a consultant to convert its content, implement a system, and get three outputs at the end of the process instead of one, but that only scratches the surface of the potential.
What DITA does for them, or for us, is not really important; what you and your business need, and whether or not DITA can deliver it, is. I've assumed that by 'flexible' publishing you meant multi-format or single-source publishing. Real content repurposing and dynamic publishing could mean something quite different, in which case, unless you do both, you won't achieve the end goal. If you want truly 'flexible' content, you're going to have to manage your creation and management processes properly, or you'll end up with Franken-docs that don't read properly when content is reassembled in new ways.
Click to hear the full webinar: Twelve DITA Implementations: The Lessons Learned