8

Editorial and production workflows

Volker Böing

Abstract:

This chapter deals with the growing complexity of today’s scientific publishing business and describes possible solutions to tackle the challenges presented by the continuing growth of publication output, as well as the increasing variety of print and electronic formats required for book and journal content. The chapter describes the development of editorial and production workflows based on XML, as well as the integration and automation of systems through business analysis and the realisation of Business Process Management approaches.

Key words

Editorial workflows

production workflows

metadata

XML

validation

business process management

automation

quality assurance

Introduction

For centuries, printed products such as books and journals were the only content media a publisher had to control and take care of, with clear supply chain responsibilities and roles. In parallel with the wider technological developments of the past 15 years (such as the growth of the Internet), the academic and professional publishing industry has seen a rapid evolution of new digital publication channels and content formats that have become valuable parts of a publishing house’s product portfolio. Furthermore, the scholarly publishing industry is confronted by the immense international growth in the number of researchers, with a concomitant increase in publication output, especially from China and other Asian countries, which are investing more heavily in research and development (Blaschke, 2003; UNESCO, 2009). This immense increase in information (Marx and Gramm, 2002) and the associated management challenges are of critical importance to scholars, the publishing community and customers alike.

Electronic publishing (ePublishing) promises to cope with this growth in a timely and cost-effective manner, allowing publishing businesses to focus on content for online-first publication as well as reducing print and distribution supply chain demands through the adoption of new, digital printing processes.

Advances in formats and in editorial and production workflows

ePublishing has moved the focus away from print-only distribution channels to electronic-only (e-only) and multi-channel publication approaches (Research Information Network, 2010). Advances in digital printing technologies (BISG, 2010), such as short-run printing and print on demand (POD), have made it possible for publishers to move away from a strategy of printing huge initial quantities of stock which might later need to be pulped, depending on the reality of whether a product meets with market expectations.

The quality of digitally printed products has much improved over the last few years and customers rarely notice a difference between a digitally printed book and a book printed via offset. In offset printing, an image is transferred (offset) from the printing plate to a rubber blanket, and then again to the printing surface, whereas digital printing starts from a digital print image held in a computer, which is then output via laser or inkjet printers. The initial effort required to set up an offset printing job is higher than that needed for digital printing, but offset is more cost-effective for larger print runs. For example, for a new edition with high potential sales, or for the reprint of a bestseller, a decision must be taken as to how big the offset print run will be. For a low print-run publication, printing to order or even taking an e-only approach would be a good solution to reduce warehouse costs and to allow publication of titles which would never have been published under the old cost- and budget-driven circumstances of a print-only world.

In the past, printing was the main purpose of a production workflow. Today, however, production workflows are themselves developing into content preparation process support systems which create the various formats needed to supply the different distribution channels. Printing uses just one of the many formats created. Today, some publishers approach publications primarily as e-only products, whilst creating print-relevant formats anyway and storing them in a central data repository. Whether these ‘print-ready’ versions are used at a future date is dependent on the publisher’s product placement strategy and on the demands of the market.

As in other media-oriented industries, it should be up to the customer to decide how to consume content, and thus format (including print) becomes a matter of choice to be supported. One sample strategy for dealing with the new expectations for both online and print content is Springer’s MyCopy, where low-cost print products are available on demand to customers who have access to related eBook packages. Other sample strategies are the ‘Espresso’ POD machine, or service providers such as BOD (Books on Demand), which allow individual copies of books to be loaded onto their system to be printed on-site and on demand at a host institution. Increasingly, academic and professional publisher outputs are heading online first, with print levels dropping as print is replaced and enhanced by new technologies, access models and discoverability tools. To manage these developments and avoid duplication of production efforts downstream, new editorial and production workflows need to be defined and established. But applying digital processes is a highly complex challenge, particularly given the growing amount of data, formats and electronic features compared with the print-oriented publishing process. As well as providing softcover and hardcover print versions, today’s publishing business processes and IT systems have to handle various electronic formats such as PDF, HTML and ePub, which are required by different reading devices (from desktop computers to handheld readers and, increasingly, even smart phones).

Preparing content in each format from scratch can have major cost implications but is a necessary action when historical content also has to be accessed for today’s business models. For frontlist title production, a complete set of formats is desirable and requires only minor effort when the formats are a by-product of the initial typesetting, editing and artwork processes for a publication. XML (Extensible Markup Language) derivatives such as ePub can be created from full-text XML, and a one-time investment in a style sheet often does the job without any extra costs.
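As a minimal illustration of this approach, the following sketch (in Python, using the lxml library) applies a small XSLT style sheet to a fragment of full-text XML to produce the XHTML from which an ePub package is assembled. The element names and the style sheet are illustrative assumptions rather than any publisher’s actual DTD or transformation.

# Minimal sketch: rendering XHTML (the core of an ePub content document) from
# full-text XML with an XSLT style sheet. The element names (Chapter, Title, Para)
# and the style sheet are illustrative assumptions, not a real publisher DTD.
from lxml import etree

chapter_xml = etree.fromstring(
    "<Chapter><Title>Editorial and production workflows</Title>"
    "<Para>Full-text XML allows layout-independent rendering.</Para></Chapter>"
)

stylesheet = etree.XML("""\
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/Chapter">
    <html><body>
      <h1><xsl:value-of select="Title"/></h1>
      <xsl:for-each select="Para"><p><xsl:value-of select="."/></p></xsl:for-each>
    </body></html>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)
xhtml = transform(chapter_xml)
print(str(xhtml))  # XHTML ready to be packaged into an ePub container

The same full-text XML, run through different style sheets, can yield HTML for an online platform or input for a PDF-rendering engine, which is what makes the one-time investment worthwhile.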

However, a move from print-oriented to digital print-and-online publishing is not simply a question of production processes, as workflows are very much intertwined with other departments in a publishing company. This is particularly clear when one considers the need for appropriate upstream formatting of electronic files and the addition of adequate and correct metadata to allow for user discoverability in the new electronic architectures that lie downstream.

Metadata and XML-based processing

The previous section explained the demands and requirements of various content formats. One way to meet these demands automatically is via a layout-independent XML file which can be used to render the various content formats. But XML is not only beneficial for content processing: it can also be used to encode the bibliographic metadata of a publication. Content may still be king but metadata are clearly master. Without a correct and reliable metadata description of content, the content itself is useless in an ePublishing environment where visibility and availability are so important. Every wholesaler needs to put content in the right corner of an online shop, every POD printer needs to print it according to the correct specifications and every librarian needs to place it on the correct shelf of a digital library; metadata are essential for all these purposes. XML coding of text and metadata provides machine-readable data for these various purposes, as well as for production tracking systems, and it has thus become a fundamental tool in electronic content management and production processes. XML is the basis for:

• metadata importing and exporting
• layout-independent content representation
• delivery consistency checking
• semantic enrichment
• cross-reference and forward-linking processes
• workflow job descriptions (technical production parameters and deliverables)
• derived product support

XML data are defined by a so-called Document Type Definition (DTD): a set of rules, similar to the grammar of a language, controlling how certain data elements can be grouped and nested. Furthermore, the data elements themselves are defined in the DTD. Using the language metaphor, a DTD provides the grammar and the vocabulary for XML files, and sentences written in this language can be validated and their correctness proved against the rules defined. An XML element itself consists of a descriptive opening tag, a closing tag and the text content in between. A computer program (applications in this context are called parsers) reads through the ‘sentences’ in an XML file and can semantically interpret what certain text portions mean.
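As a minimal illustration of these building blocks, the following sketch (in Python, using the lxml library) validates a small XML fragment against an equally small, purely illustrative DTD; a real publisher DTD is of course far more extensive, and the element names and values shown are assumptions.

# Minimal sketch: validating an XML fragment against a small, illustrative DTD.
# The DTD and the element values are assumptions for demonstration only.
from io import StringIO
from lxml import etree

dtd = etree.DTD(StringIO("""\
<!ELEMENT Journal (JournalTitle, JournalID)>
<!ELEMENT JournalTitle (#PCDATA)>
<!ELEMENT JournalID (#PCDATA)>
"""))

record = etree.fromstring(
    "<Journal><JournalTitle>Chromosoma</JournalTitle>"
    "<JournalID>412</JournalID></Journal>"
)

if dtd.validate(record):
    print("Record is valid against the DTD")
else:
    print(dtd.error_log.filter_from_errors())  # lists which grammar rules were broken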

In the XML fragment represented in Figure 8.1, the parser can identify that after the opening tag <JournalTitle> the title of the journal follows until the closing tag </JournalTitle> is reached. The text content ‘Chromosoma’ is thus given a meaning and can be processed based on that meaning. XML Schemas (Duckett et al., 2001) also define rules about the content within the elements, and it is possible to specify, for example, date ranges or character sets which are valid or not valid as content in the appropriate elements. A further step in validation efforts for XML data is Schematron (http://www.schematron.com/), which goes beyond the format and content validity checks that can be achieved by DTDs and XML Schemas by allowing an additional validation of XML content against certain business rules. The right elements may be used and the correct content type tagged with them, but the overall XML file might still not be correct if the values of the various elements do not fit together and make sense according to the business rules, which are clearly of great importance. For example, an XML feed from a peer-review system to a production environment which holds a <SubmitOnline> date that is after the <Accepted> date is correct with regard to the DTD and the structuring of the data, but simply wrong according to the business rule that an article first has to be submitted before it can be reviewed and accepted for publication. This is a simple example, but identifying such rules can be a labour-intensive and painful exercise, particularly when rules emerge that had not been thought of from the beginning, or when processing errors caused by incorrect data have to be addressed.


8.1 Example of XML-encoded journal metadata
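Schematron expresses such business rules declaratively against the XML itself; as a minimal sketch of the underlying idea, the check below applies the <SubmitOnline>/<Accepted> rule in plain Python. The ISO date format and the element nesting are assumptions for illustration.

# Minimal sketch of the business rule discussed above: an article cannot be
# accepted before it was submitted. Schematron would state this declaratively;
# the same check is shown here in plain Python. The date format is an assumption.
from datetime import date
from xml.etree import ElementTree as ET

feed = ET.fromstring(
    "<ArticleHistory>"
    "<SubmitOnline>2011-03-01</SubmitOnline>"
    "<Accepted>2011-02-15</Accepted>"
    "</ArticleHistory>"
)

submitted = date.fromisoformat(feed.findtext("SubmitOnline"))
accepted = date.fromisoformat(feed.findtext("Accepted"))

if submitted > accepted:
    print("Business rule violated: accepted date lies before the submission date")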

The combination of metadata and content structures in one DTD is advisable and hugely beneficial as it brings metadata and content together in one complete electronic package. Such an approach provides elements and attributes for bibliographic data of a publication as well as for the body, front and back matter parts, which can all be described with the elements of one DTD. An XML file based on such a universal publication DTD holds everything which belongs to an article or chapter. Whether such an instance is stored on a file store or as entries in an object-oriented XML database is secondary from a production point of view but can make a difference in the way the XML can be used further down the line for the support of online platform functionalities or for the re-use of already published content.

Header information for journal and book content can contain basic descriptive information such as publisher location, journal title and ISSN. A subset of metadata information for journal content can be seen in Figure 8.1.

Complete and consistent metadata preparation allows for content packages to be clearly identified at any stage of production. This can also be applied to tables, figures and possibly other media objects within XML files, allowing automatic processing to be applied in a straightforward way. The deliverables can then be automatically placed in the correct folders in a file store, with metadata uploading to and updating the product database on import. Metadata updates in the other direction, from a product database to individual publications, are also possible when the XML files are exported from a central production data repository, i.e. user changes are reflected ‘on the fly’ when working with XML and automated systems.
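The following sketch illustrates the filing step in Python: a file-store location is derived from the metadata carried in an XML header, so that deliverables can be placed automatically. The folder convention (journal, volume, issue, article) and the element names are assumptions for illustration.

# Minimal sketch: deriving a file-store location from XML metadata so that
# deliverables can be filed automatically. The folder convention and element
# names are assumptions, not a real production specification.
from pathlib import Path
from xml.etree import ElementTree as ET

header = ET.fromstring(
    "<ArticleHeader>"
    "<JournalID>412</JournalID><VolumeID>120</VolumeID>"
    "<IssueID>4</IssueID><ArticleID>329</ArticleID>"
    "</ArticleHeader>"
)

target = Path("filestore").joinpath(
    header.findtext("JournalID"),
    header.findtext("VolumeID"),
    header.findtext("IssueID"),
    header.findtext("ArticleID"),
)
print(target)  # filestore/412/120/4/329, where the PDFs, XML and images would be placed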

Full-text XML allows for layout-independent content representation (Figure 8.2), including the creation of ePub and HTML files, as well as serving as input for PDF-rendering engines to use in creating print and online files.


8.2 Layout-independent XML data and its use

Full-text XML content also supports other products and services which are either not directly related to a publication itself or which enrich the existing content. Examples include reference linking from article citations to related book and journal content available online, and ‘cited-by’ linking, where articles maintain a record of citations by external publications.

Semantic enrichment (i.e. the identification and tagging of key terms and definitions for use in building linked relationships between contextually relevant material) makes chapter and article content more valuable for the end-user and increases the overall reading and usage experience (Figure 8.3). Full-text XML files can be used to create and feed into additional meta-products such as image databases by using semantically tagged image data created during the production process to retrieve and to re-use images from existing published content.


8.3 Example of semantic linking of terms and definitions
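In its simplest form, such enrichment amounts to wrapping occurrences of known key terms in a dedicated element so that they can later be linked to definitions or related content. The sketch below shows this idea in Python; the term dictionary, the element name and the naive matching are assumptions, and production pipelines rely on controlled vocabularies and far more robust text analysis.

# Minimal sketch of semantic enrichment: occurrences of known key terms are
# wrapped in a <Term> element so they can later be linked to definitions.
# The term dictionary, element name and naive matching are assumptions.
import re

terms = {"centromere": "def-centromere", "telomere": "def-telomere"}

def enrich(paragraph_text: str) -> str:
    for term, target in terms.items():
        paragraph_text = re.sub(
            rf"\b{re.escape(term)}\b",
            rf'<Term TermID="{target}">\g<0></Term>',
            paragraph_text,
            flags=re.IGNORECASE,
        )
    return paragraph_text

print(enrich("The centromere links sister chromatids."))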

Ideally, the use of XML will already have started in the manuscript tracking/peer review system, providing the initial author and article metadata that can be used to create a technical representation of the future publication within a production IT system. On the content creation side, an author can already be provided with TeX or Word layout templates to ensure that only those structural elements supported by the publisher’s XML DTD are used. The use of XML to transfer data via web services from one application to another is much more efficient and controllable than the ‘watch-dog’ application/script-based approaches of the past, where folders on a server were monitored for incoming data that were processed on arrival.
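The following sketch illustrates such a web-service hand-over in Python: the XML package is posted to a production-system endpoint and the acknowledgement is checked immediately, instead of files being dropped into a watched folder and failures only surfacing later. The endpoint URL and the payload structure are hypothetical.

# Minimal sketch of a web-service hand-over from a peer-review system to a
# production system. The endpoint URL and the payload structure are hypothetical.
import requests

article_package = (
    "<ArticleDelivery>"
    "<DOI>10.1007/s00412-011-0329-6</DOI>"
    "<Status>Accepted</Status>"
    "</ArticleDelivery>"
)

response = requests.post(
    "https://production.example.org/api/articles",  # hypothetical endpoint
    data=article_package.encode("utf-8"),
    headers={"Content-Type": "application/xml"},
    timeout=30,
)
response.raise_for_status()  # a failed transfer is detected at once, not hours later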

As well as the mere transfer of XML-encoded content and publication data, workflow requirements such as typesetting, artwork and language editing can also be described via XML and validated against expected results on receipt. Delivery consistency checking using XML parsing (checking the delivered content structure against the expected content structure) is relatively straightforward and avoids the kinds of inconsistent and incomplete data that later require painful correction.
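A minimal sketch of such a consistency check is shown below: the deliverables announced in an XML job description are compared with the files actually received from the vendor. The job-description format and the folder layout are assumptions for illustration.

# Minimal sketch of a delivery consistency check: the files announced in an XML
# job description are compared with the files actually delivered by the vendor.
# The job-description format and folder layout are assumptions.
from pathlib import Path
from xml.etree import ElementTree as ET

job = ET.fromstring(
    "<Delivery>"
    "<File>chapter8.xml</File>"
    "<File>chapter8.pdf</File>"
    "<File>fig8-1.tif</File>"
    "</Delivery>"
)

expected = {f.text for f in job.findall("File")}
received = {p.name for p in Path("incoming/chapter8").glob("*")}

missing, unexpected = expected - received, received - expected
if missing or unexpected:
    print("Delivery rejected. Missing:", sorted(missing), "Unexpected:", sorted(unexpected))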

Any new requirements arising for the DTD and the XML encoding of content and metadata can usually be met through backward-compatible upgrades. Occasionally, of course, major changes to a DTD will be unavoidable and migration of existing XML markup may be required (which can be problematic for XML files in production, for example with vendors or on application servers awaiting automatic processing). While this can be quite a complicated operation, the benefits of integrating application systems and databases through XML and a common DTD definitely outweigh such drawbacks.

Electronic production workflows

Standardisation and global approaches are key to achieving automation in the cooperation between external vendors and internationally operating publishers. Electronic production workflows transfer content items at various production stages between vendor (e.g. those responsible for typesetting, artwork or printing) and publisher IT systems.

Initiating a standardised electronic production workflow requires analysis and specification of the deliverables and processing requirements for the various steps within a journal and book production chain. Developing a common workflow and establishing a technical language that vendors and internal production staff alike can share and communicate with is the backbone of an electronic production workflow. To form a basis for cooperation and standardisation, the following fields are recommended as mandatory elements of such a workflow specification:

• Production Workflow Components (Chapters, Covers, Adverts, etc.)
• XML Creation
• PDF Requirements (Print & Online)
• Artwork Specifications
• Layout
• Trim sizes/Formats
• Author Instructions
• Quality Check Routines
• Copy Editing
• Proof Procedures
• Responsibilities

The workflow diagram in Figure 8.4 shows an example of a possible journal production workflow. The definition of stage numbers, which can be referenced in technical instructions and user handbooks, is very useful, while the use of discrete (article or chapter) and compound (issue and book) workflows will form part of a publisher’s own specifications.


8.4 Example of a journal article production workflow representation

The discrete workflow shown starts with the initiation of an article object. This can be triggered by direct upload of manuscript data to a production workflow server by a user or through automatic transfer of author source data from a peer review system upon article acceptance. The responsible production personnel can, at this stage, finalise the initial metadata before the content preparation and author query phases of the workflow start in parallel. It is advisable to use the time the author needs to complete the appropriate questionnaires and the copyright transfer to proceed with the first typesetting and language editing round. This keeps the overall publication time down and gives the author a timely publishing experience.

After finalisation of the proof stage an article can already be published online, at which point the article will be citable via its Digital Object Identifier (DOI). A DOI is a string of characters which serves as a unique identifier for content as well as a permanent link to it. Scholarly publishers assign DOIs through the CrossRef system. CrossRef issues the publisher with a prefix (e.g. 10.1007 for the scientific publisher Springer) and the publisher assigns a unique suffix string identifying the publication itself. For example, 10.1007/s00412-011-0329-6 identifies an article in the journal ‘Chromosoma’ and, when used with the DOI proxy server – http://dx.doi.org/10.1007/s00412-011-0329-6 – is resolved to the publisher’s online platform via the Uniform Resource Locator (URL) registered with the DOI (Figure 8.5). Such ‘key/value’ pairs allow publishers to update or change the physical location of content without altering the DOIs themselves: a DOI stays valid even though the physical location of the content changes; it is a permanent way to cite electronic versions of scholarly content. CrossRef provides services such as the reference linking process mentioned above.


8.5 Reference linking and DOI metadata storage
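The mechanics can be sketched in a few lines of Python: the DOI is assembled from the CrossRef prefix and the publisher-assigned suffix, and the DOI proxy answers a request for it with a redirect to whatever URL is currently registered for that DOI.

# Minimal sketch: assembling a DOI from prefix and suffix and resolving it via
# the DOI proxy, which redirects to the URL registered by the publisher.
import requests

prefix = "10.1007"            # CrossRef prefix issued to the publisher
suffix = "s00412-011-0329-6"  # publisher-assigned, unique per publication
doi = f"{prefix}/{suffix}"

response = requests.get(f"https://doi.org/{doi}", allow_redirects=False, timeout=30)
print(response.status_code)              # typically 302: the proxy redirects, it does not host content
print(response.headers.get("Location"))  # the URL currently registered for this DOI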

Once all articles comprising a journal issue are published online, a compound workflow can begin to build the issue, assigning articles under the familiar volume, issue and year structure of a journal. Different approaches to issue building are practised by different publishers depending on the needs and wishes of their customers, with some opting for a more or less automatic assignment of articles to an issue (first in, first out), and others deciding which articles should end up in a certain issue and, within that, in what sequence (here the editor-in-chief or members of the editorial board are likely to be most active in deciding the issue structure). When the issue structure has been built, the files can be sent for final typesetting and pagination, including updating bibliographic metadata at the article level. For those publishers adopting automatic issue building and assigning articles consecutively during the publication year, the final bibliographic metadata can be made available at the time of article acceptance and handover to production.
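A minimal sketch of the ‘first in, first out’ variant is shown below in Python; the page budget and the article records are assumptions for illustration.

# Minimal sketch of 'first in, first out' issue building: online-first articles
# are assigned to the next issue in publication order until a page budget is
# reached. The page budget and the article records are assumptions.
published_online = [   # (article ID, page count), in order of online publication
    ("article-0301", 14),
    ("article-0315", 22),
    ("article-0329", 18),
]

PAGE_BUDGET = 40       # pages available in the next issue
issue, pages_used = [], 0
for article_id, pages in published_online:
    if pages_used + pages > PAGE_BUDGET:
        break          # remaining articles wait for the following issue
    issue.append(article_id)
    pages_used += pages

print(issue, pages_used)  # these articles now receive their final issue pagination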

An electronic production workflow should streamline the production process and provide the ability to create all necessary output formats in one go, especially as customers are increasingly interested in XML, HTML, PDF and ePub outputs as replacements for, or additions to, a printed product. The number of formats which are actually stored in a publisher’s content repository can be reduced to a layout-independent XML file of the content together with related media objects, in principle allowing for all other formats to be created on the fly. Data feeds in the form of ONIX or proprietary formats can be rendered alongside these content formats, depending upon customer requirements.

When implementing an electronic production workflow like the one described above, it is important to aim for a technical system design which is as configurable and modular as possible to enable sufficient flexibility for future development. This is particularly relevant to examples of workflow change and where new responsibilities crop up, as well as to supporting the potential for new product development. Such modularity means it should not be necessary to re-build major parts of a system or to interfere with existing, reliable and accepted functionalities. It should only be necessary to enhance the relevant content model or technical modules to provide additional functionalities or processes for new format outputs.

Business process management and IT systems development

Any definition of the term Business Process Management is heavily influenced by IT, but understanding it in a broader sense, with an integrated view of management, organisation and control, allows a beneficial, goal-oriented approach to designing and implementing business processes successfully. IT is an important part of this and a supporting component (Schmelzer and Sesselmann, 2010).

Business process management analysis and specification approaches are critical to integrating global ePublishing systems composed of electronic manuscript submission, electronic and traditional print production processes, and associated business and market information subsystems.

Establishing ‘who does what, when, and how?’ across the publishing business process chain of editorial, production, sales, marketing and fulfilment for book and/or journal content is a challenge for any publisher to consider, and even more so to apply. Ideally, ePublishing systems should be put in place with the functionality to support the various departments and task-responsible people within the overall process chain. Although not everything that can be supported by IT system functionality has to be supported, everything that is required for consistent journal or book products should be mandatory in the content and workflow management systems. A global system should support mandatory steps from initial metadata gathering, through content creation and production, and on to transfer of data (content and metadata) between internal users, external vendors, customers, content aggregators and distributors.

An analysis of the business is the initial step to setting up any such system, as seemingly simple questions with regard to relevant technical parameters can result in a wide variety of answers. Such a variety of existing instructions can lead to a huge number of different production processes on the vendor’s side, even if from a bird’s eye view there is a one-to-one relationship between publisher and vendor; rationalising these approaches is important for establishing a robust and efficient system.

Business analysis can also have a value in itself, quite apart from helping to develop an appropriate technical workflow. In clearly answering the questions of ‘who does what, when and how?’, an analyst needs to review the roles, responsibilities and dependencies of particular business process steps in terms of both short-term needs but also looking to the longer term, i.e. investigating the potential for future improvements and optimisation of systems and procedures.

Given the impossibility of talking to all staff in medium to large companies, particularly those with multiple business locations across the globe, the introduction of a key-user concept is advisable, as this allows for representatives to be invited to explain the business process and discuss the specific requirements of their peer group. Extracting all the detail of common processes and potential future developments from the received insights and discussions will be a challenge. Many departments may have good reasons for their specific ways of approaching processes, including the simple fact that things are done in a certain way because that’s how they have always been done!

In smaller companies, analysing ‘who does what, when and how?’ may be easier to accomplish but, simultaneously, resources to implement overall change may be more restricted. An alternative solution is to ‘buy in’ pre-packaged systems from vendors who can effectively perform a ‘business analysis’ as part of their tendering and business development processes. Fast start-up and quick wins can be achieved by introducing such an environment based on a vendor’s system, but further advantages can sometimes prove hard to achieve after its introduction. Suppliers of such systems sell a product to a number of publishers, and the functionality provided is often the result of what is common amongst the existing customers, so it might not cater fully for an individual publisher’s specific workflow. Nevertheless, especially for smaller publishers, this can be a very good solution if the appropriate competence team and IT organisation do not exist in-house.

As in every other business field, change can be perceived as a threat to existing procedures, and resistance is therefore to be expected if change is implemented the wrong way. But when change is approached in an open, flexible and cooperative manner, it can also be a great opportunity for fruitful discussions and a motivating experience for every participant. This is especially true where the potential for systems optimisation, and the great opportunity this presents for quality and efficiency, is recognised.

Bringing people on board during an analysis and design phase is crucial for the overall success of a business process management project. Getting a clear and logical conceptual framework that is easily understood in the first place improves user buy-in before the actual technical implementation starts. Indeed, even if the first version of an IT system does not match expectations, the trust gained during an initialisation, analysis and design phase can carry the project through.

Once agreement is reached on a conceptual framework for future internal and external business process steps and activities, the functional requirements can be developed through a technical design phase with developers and system architects. These developers and architects would ideally have already participated in the initial user/peer group discussions, in order to better understand where the system needs to be heading. They can also feed into these earlier discussions with appropriate interventions when visions of future system functionality lead proceedings in directions which cannot be realised without higher costs and more effort than is reasonable.

Continuous progress updating is advisable during the implementation phase to keep users informed about the realisation of concepts they took part in defining during the decision-making process. Agile programming methodologies, such as SCRUM,1 can be very useful here, with small and fast development cycles providing reassurance that developments are heading in the right direction. Mock-ups and story boards can also provide useful means of ensuring transparent development.

To facilitate the technical implementation of the proposed IT system, a clear representation of the products and processes covered – including books and journals and the range of publication types in between – is required. This object model has to be well defined with a well-designed data structure that is flexible enough to fulfil all needs but without becoming too complex, or precluding future maintenance and further changes that may arise during the lifetime of the system (Figure 8.6).


8.6 Example of a book object model and associated business processes
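As a minimal illustration of what such an object model might look like in code, the following sketch defines book and chapter objects as Python data classes. The classes, attributes and workflow stages are assumptions for illustration and not the actual model shown in Figure 8.6.

# Minimal sketch of a product object model expressed as data classes. Classes,
# attributes and stage names are assumptions, not the model of Figure 8.6.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Chapter:
    title: str
    doi: str
    workflow_stage: str = "manuscript received"

@dataclass
class Book:
    isbn: str
    title: str
    chapters: List[Chapter] = field(default_factory=list)

    def ready_for(self, stage: str) -> bool:
        # a compound workflow step (e.g. sending the book to print) may only
        # start once every discrete chapter workflow has reached the given stage
        return all(c.workflow_stage == stage for c in self.chapters)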

The system should be able to support content management processes and workflows, contracts, content files and supplementary electronic materials, marketing texts, author biography and other publication-relevant material. At the end of a publishing process, a final record within the system should contain everything of relevance to print and online publication as well as long-term archiving and preservation, ideally reflected in an XML DTD for import and export of content and metadata.

Another important field where business process management approaches can be beneficial is in communication and data transfer with external vendors during the workflow stages of relevance to production. A clear specification of vendor tasks and expected deliverables can introduce a location-independent route to efficient product creation. The benefits of such a standardised way of publishing book and journal content are huge, although limitations here include a loss of flexibility in realising special requirements or ideas, given the complex nature of globally used systems with a limited potential for localised variations.

An Enterprise Content Management system (Figure 8.7) with the aim and scope described above is a ‘living thing’. Permanent monitoring and an open ear to users and management are necessary to maintain the continuity and value of this kind of asset.


8.7 Example of an Enterprise Content Management model

The conceptual overview of a Process and Content Management system shown in Figure 8.6 names the main components of a possible publisher’s IT infrastructure. Starting with a Peer Review System, where the decision-making process (‘Should the title be published?’) is supported and the collaboration of editorial and in-house or external reviewers takes place, such a system provides external as well as internal editorial board members with the appropriate means to cope with the publication queue of a journal. After a publication is accepted, an automatic transfer of initial metadata such as peer review dates, author contact information and the author’s source data (manuscripts including the raw data for images, tables, text and possible electronic supplementary material) can greatly improve the overall turnaround time of a publication. Once accepted, product data and production systems take over, steering the production chain and providing life cycle, content management, validation and workflow functionality.

Such a product and production system provides an overview of frontlist title production as well as of the historical pool of publications. Product data and production systems can be integrated or separate, with a hand-over-to-production step leading from a pure product database to a more content-management-oriented production system. In the case of outsourced typesetting, artwork, XML creation and proof handling, vendors have to provide the appropriate IT infrastructure to do this work, as shown in Figure 8.7. In the case of in-house creation of content files, the production system on the publisher’s side has to have the appropriate tools available to complete these tasks. Finally, accepted and published versions of publications have to be archived and distributed, and such system functionality can be integrated in a single production IT system or be part of a separate one. In Figure 8.7, only the final version is sent to the archiving and distribution system, and the various previous versions created during the initial stages are deleted and vanish from the production system, where only the current frontlist title version remains. Throughout all of these system components, it is necessary to define tasks (up to the level of job descriptions), stages, responsibilities, deliverables, validation rules and consistency constraints, and to provide sufficient tracking and monitoring functionalities.

Quality assurance

Key performance indicators can show whether something is running well but they are of limited use in explaining the root cause of a problem within the publishing/production chain; they would not, for example, detect high attrition rates on the vendor’s side as a reason for a sudden peak in disapproval rates in proof stages. Such staff turnover issues are of particular concern when outsourcing work to vendors (e.g. in Asia), making it necessary to continuously ensure a common technical basis of understanding amongst internal users as well as between internal staff and external vendors.
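As a minimal illustration, the sketch below computes two of the indicators discussed in this section, average turnaround time and proof disapproval rate, from simple workflow tracking records; the record format and the values are assumptions.

# Minimal sketch: computing average turnaround time (acceptance to online
# publication) and proof disapproval rate from workflow tracking records.
# The record format and values are assumptions for illustration.
from datetime import date

records = [  # (accepted, published online, proof approved at first attempt?)
    (date(2011, 2, 15), date(2011, 3, 10), True),
    (date(2011, 2, 20), date(2011, 4, 2), False),
    (date(2011, 3, 1), date(2011, 3, 25), True),
]

turnaround = sum((pub - acc).days for acc, pub, _ in records) / len(records)
disapproval_rate = sum(1 for *_, ok in records if not ok) / len(records)

print(f"average turnaround: {turnaround:.1f} days")
print(f"proof disapproval rate: {disapproval_rate:.0%}")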

It is necessary to establish appropriate quality assurance measures to maintain and develop an integrated publishing approach. Continuous monitoring of usage (how working procedures are applied) and results allows a publisher to ensure that systems keep pace with the requirements of the business.

The following areas can be the subject of quality assurance measures:

• enforcement of standardised processes and production methods
• error reduction
• turnaround times
• disapproval rates
• training of users
• vendor and internal department audits
• author satisfaction surveys

Any updates or changes to the system arising out of quality assurance feedback or for other business reasons also need to be clearly communicated and incorporated into existing specifications if standardised approaches are to be maintained. In this respect, it is also important that vendor IT systems are able to communicate with a publisher’s technical infrastructure.

Conclusion and outlook

Whilst in the past it was most important for a printed product to be in good shape with bibliographic metadata correct on the cover and on title pages, in today’s digital arena this is not sufficient. Changes in distribution and fulfilment processes have seen a fall in book and journal print levels. Traditional production workflows are migrating to pure content preparation electronic workflows, handling new and more complex kinds of author source data in the same way, and using XML-based workflows to achieve layout-independent content that can be output to multiple print and electronic formats. These changes have led to publisher product databases playing a central role in any company’s operations. To correctly maintain the effectiveness of this central resource, complete and consistent metadata are required.

Business process management can bring value to publishers in terms of the efficiency, sustainability, agility and quality benefits that analysis and specification can help to realise. Integrated content management systems are becoming a must-have item, especially for publishers with increasing book and journal throughput.

Looking forward, the technical potential of rendering engines and database publishing will allow publishers to manage and build their businesses, and it seems very likely that continuing advances in ePublishing will lead to even greater levels of automation, especially in the context of content production workflows.

References

BISG (Book Industry Study Group). Digital Book Printing For Dummies®. BISG/Wiley, http://www.bisg.org/publications/product.php?p=20, 2010.

Blaschke, S. Die Informationsexplosion und ihre Bewältigung: Gedanken zur Suche nach einem besseren System der Fachkommunikation. Information – Wissenschaft und Praxis, 2003;54(6). http://www.archive.org/details/DieInformationsexplosionUndIhreBewltigung

Duckett, J., Griffin, O., Mohr, S. Professional XML Schemas (Programmer to Programmer), illustrated ed. Wrox Press; 2001.

Marx, W., Gramm, G. Literaturflut – Informationslawine – Wissensexplosion. Wächst der Wissenschaft das Wissen über den Kopf? http://www.mpi-stuttgart.mpg.de/ivs/literaturflut.html, 2002.

Research Information Network. E-only scholarly journals: overcoming the barriers. http://www.rin.ac.uk/our-work/communicating-and-disseminating-research/e-only-scholarly-journals-overcoming-barriers, 2010.

Schmelzer, H.J., Sesselmann, W. Geschäftsprozessmanagement in der Praxis: Kunden zufrieden stellen – Produktivität steigern – Wert erhöhen, 7., überarb. Aufl. Hanser Verlag; 2010.

UNESCO Institute for Statistics, http://www.kooperation-international.de/detail/info/unesco-institute-for-statistics-a-global-perspective-on-research-and-development.html, 2009.


1. A methodology of agile software engineering with iterative, incremental software deployment cycles.
