Preface
Over time, things change—things like customers, products, accounts, and so forth. But most of the data we keep about things describes what they are like currently, not what they used to be like. When things change, we update the data that describes them so that the description remains current. But all these things have a history, and many of them have a future as well, and often data about their past or about their future is also important.
It is usually possible to restore and then to retrieve historical data, given enough time and effort. But businesses are finding it increasingly important to access historical data, as well as data about the future, without those associated delays and costs. More and more, business value attaches to the ability to directly and immediately access non-current data as easily as current data, and to do so with equivalent response times.
Conventional tables contain data describing what things are currently like. But to provide comparable access to data describing what things used to be like, and what they may be like in the future, we believe it is necessary to combine data about the past, the present and the future in the same tables. Tables which do this, which contain data about what the objects they represent used to be like and what they may be like later on, together with data about what those objects are like now, are versioned tables.
Versioned tables are one of two kinds of uni-temporal tables. In this book, we will show how the use of versioned tables lowers the cost and increases the value of temporal data: data that describes what things used to be like, what they are like now, and sometimes what they will be like as well. Costs, as we will see, are lowered by simplifying the design, maintenance and querying of temporal data. Value, as we will see, is increased by providing faster and more accurate answers to queries that access temporal data.
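Purely as an illustration, here is a minimal SQL sketch of a versioned Customer table. The column names, and the closed-open convention for effective-time periods, are choices we make for this sketch only; they are not the full schemas developed later in this book:

```sql
-- Illustrative only: a uni-temporal versioned Customer table.
-- Each row describes what one customer is like during one period of
-- effective time; [effective_begin, effective_end) is closed-open.
CREATE TABLE Customer_Version (
    customer_id     INTEGER     NOT NULL,  -- business key for the customer
    effective_begin DATE        NOT NULL,  -- when this version begins to apply
    effective_end   DATE        NOT NULL,  -- when it ceases to apply
    customer_status CHAR(4)     NOT NULL,  -- e.g. 'VIP'
    customer_name   VARCHAR(80) NOT NULL,
    PRIMARY KEY (customer_id, effective_begin)
);
```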
Another important thing about data is that, from time to time, we get it wrong. We might record the wrong data about a particular customer's status, indicating, for example, that a VIP customer is really a deadbeat. If we do, then as soon as we find out about the mistake, we will hasten to fix it by updating the customer's record with the correct data.
But that doesn't just correct the mistake. It also covers it up. Auditors are often able to reconstruct erroneous data from backups and logfiles. But for the ordinary query author, no trace remains in the database that the mistake ever occurred, let alone what the mistake was, or when it happened, or for how long it went undetected.
Fortunately, we can do better than that. Instead of overwriting the mistake, we can keep both the original customer record and its corrected copy in the same table, along with information about when and for how long the original was thought to be correct, and when we finally realized it wasn't and then did something about it. Moreover, while continuing to provide undisturbed, directly queryable, immediate access to the data that we currently believe is correct, we can also provide that same level of access to data that we once believed was correct but now realize is not correct.
There is no generally accepted term for this kind of table. We will call it an assertion table. Assertion tables, as we will see, are essential for recreating reports and queries, at a later time, when the objective is to retrieve the data as it was originally entered, warts and all. Assertion tables are the second of the two kinds of uni-temporal tables. The same data management methods which lower the cost and increase the value of versioned data also lower the cost and increase the value of asserted data.
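A comparably minimal sketch of an assertion table follows, again with illustrative names of our own choosing. The assertion-time columns record when we began, and when we ceased, to assert that a row is correct:

```sql
-- Illustrative only: a uni-temporal assertion Customer table.
-- Each row is one statement about a customer, together with the
-- period during which we asserted that the statement is correct.
CREATE TABLE Customer_Assertion (
    customer_id     INTEGER     NOT NULL,
    assertion_begin DATE        NOT NULL,  -- when we began to assert this row
    assertion_end   DATE        NOT NULL,  -- when we stopped asserting it; a
                                           -- far-future date means "still asserted"
    customer_status CHAR(4)     NOT NULL,
    customer_name   VARCHAR(80) NOT NULL,
    PRIMARY KEY (customer_id, assertion_begin)
);
```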
There are also tables which combine versions and assertions, and combine them in the sense that every row in these tables is both a version and an assertion. These tables contain data about what we currently believe the objects they represent were/are/will be like, data about what we once believed but no longer believe those objects were/are/will be like, and also data about what we may in the future come to believe those objects were/are/will be like. Tables like these, tables whose rows contain data about the past, the present and the future of things, and also about the past, the present and the future of our beliefs about those things, are bi-temporal tables.
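Putting the two sketches together gives a first picture of a bi-temporal table: one pair of columns for effective time, another for assertion time. Again, this is an illustration, not a prescribed schema:

```sql
-- Illustrative only: a bi-temporal Customer table. Effective time says
-- when the description applies to the customer; assertion time says
-- when we claimed that the description is correct.
CREATE TABLE Customer_Bitemporal (
    customer_id     INTEGER     NOT NULL,
    effective_begin DATE        NOT NULL,
    effective_end   DATE        NOT NULL,
    assertion_begin DATE        NOT NULL,
    assertion_end   DATE        NOT NULL,
    customer_status CHAR(4)     NOT NULL,
    customer_name   VARCHAR(80) NOT NULL,
    PRIMARY KEY (customer_id, effective_begin, assertion_begin)
);
```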
In spite of several decades of work on temporal data, and a growing awareness of the value of real-time access to it, little has been done to help IT professionals manage temporal data in real-world databases. One reason is that a temporal extension to the SQL language has yet to be approved, even though a proposal to add temporal features to the language was submitted over fifteen years ago. Lacking approved standards to guide them, DBMS vendors have been slow to build temporal support into their products.
In the meantime, IT professionals have developed home-grown support for versioning, but have paid almost no attention to bi-temporality. In many cases, they don't know what bi-temporality is. In most cases, their business users, unaware of the benefits of bi-temporal data, don't know to ask for such functionality. And among those who have at least heard of bi-temporality, or to whom we have tried to explain it, we have found two common responses. One is that Ralph Kimball solved this problem a long time ago with his three kinds of slowly changing dimensions. Another is that we can get all the temporal functionality we need by simply versioning the tables to which we wish to add temporal data.
But both responses are mistaken. Slowly changing dimensions do not support bi-temporal data management at all. Nor does versioning. Both are methods of managing versions; but both also fall, as we shall see, far short of the extensive support for versioning that Asserted Versioning provides.
Objectives of this Book
Seamless Access to Temporal Data
One objective of this book is to describe how to manage uni-temporal and bi-temporal data in relational databases in such a way that they can be seamlessly accessed together with current data.1 By “seamlessly” we mean (i) maintained with transactions simple enough that anyone who writes transactions against conventional tables could write them; (ii) accessed with queries simple enough that anyone who writes queries against conventional tables could write them; and (iii) executed with performance similar to that for transactions and queries that target conventional data only.
1Both forms of temporal data can be implemented in non-relational databases also. For that matter, they can be implemented with a set of flat files. We use the language of relational technology simply because the ubiquity of relational database technology makes that terminology a lingua franca within business IT departments.
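As a sketch of criterion (ii), consider queries against the illustrative bi-temporal Customer table shown earlier. A temporal query differs from its current-state counterpart only in the point-in-time predicates supplied; the customer number and dates here are, of course, invented for the example:

```sql
-- What do we currently assert the customer is currently like?
SELECT customer_status, customer_name
FROM   Customer_Bitemporal
WHERE  customer_id = 1234
  AND  CURRENT_DATE >= effective_begin AND CURRENT_DATE < effective_end
  AND  CURRENT_DATE >= assertion_begin AND CURRENT_DATE < assertion_end;

-- What do we currently assert the customer was like on June 1, 2008?
SELECT customer_status, customer_name
FROM   Customer_Bitemporal
WHERE  customer_id = 1234
  AND  DATE '2008-06-01' >= effective_begin AND DATE '2008-06-01' < effective_end
  AND  CURRENT_DATE >= assertion_begin AND CURRENT_DATE < assertion_end;
```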
Encapsulation of Temporal Data Structures and Processes
A second objective is to describe how to encapsulate the complexities of uni-temporal and bi-temporal data management. These complexities are nowhere better illustrated than in a book published ten years ago by Dr. Richard Snodgrass, the leading computer scientist in the field. In this book, Developing Time-Oriented Database Applications in SQL (Morgan-Kaufmann, San Francisco, 2000), Dr. Snodgrass provides extensive examples of temporal schemas and also of the SQL, for several different relational DBMSs, that is required to make uni- and bi-temporality work, and especially to enforce the constraints that must be satisfied as temporal data is created and maintained. Many of these SQL examples are dozens of lines long, and quite complex.
This is not the kind of code that should be written over and over again, each time a new database application is developed. It is code that ensures the integrity of the database regardless of the applications that use that database. And so until that code is written by vendors into their DBMS products, it is code that should exist as an interface between applications and the DBMS that manages the database—a single codebase used by multiple applications, developed and maintained independently of the applications that will use it. A codebase which plays this role is sometimes called a data access layer or a persistence and query service framework.
So we have concluded that the best way to provide temporal functionality for databases managed with today's DBMSs, and accessed with today's SQL, is to encapsulate that complexity. Asserted Versioning does this. In doing so, it also provides an enterprise solution to the problem of managing temporal data, thus supporting both the semantic and physical interoperability of temporal data across all the databases in the enterprise.
Asserted Versioning encapsulates the design, maintenance and querying of both uni-temporal and bi-temporal data. Design encapsulation means that data modelers do not have to design temporal data structures. Instead, declarative specifications replace that design work. These declarations specify, among other things, which entities in a logical data model are to become bi-temporal tables when physically generated, which column or columns constitute business keys unique to the object represented, and between which pairs of tables there will exist a temporal form of referential integrity.
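What might such declarative specifications look like? Purely as a hypothetical illustration, they could be captured as rows in a metadata table like the one below; the table and column names are ours for this sketch. (The AVF itself, as described later in this Preface, reads equivalent specifications from a logical data model.)

```sql
-- Hypothetical illustration only: declarative specifications captured as
-- metadata, in place of hand-designed temporal schemas.
CREATE TABLE AV_Table_Metadata (
    entity_name        VARCHAR(30)  NOT NULL PRIMARY KEY, -- entity in the logical model
    make_bitemporal    CHAR(1)      NOT NULL,  -- 'Y' = generate as a bi-temporal table
    business_key       VARCHAR(100) NOT NULL,  -- column(s) unique to the object
    temporal_ri_parent VARCHAR(30)             -- table to which temporal RI applies
);

INSERT INTO AV_Table_Metadata VALUES ('Client', 'Y', 'client_nbr', NULL);
INSERT INTO AV_Table_Metadata VALUES ('Policy', 'Y', 'policy_nbr', 'Client');
```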
Maintenance encapsulation and query encapsulation mean, as we indicated earlier, that inserts, updates and deletes to bi-temporal tables, and queries against them, are simple enough that anyone who could write them against non-temporal tables could also write them against Asserted Versioning's temporal tables. Maintenance encapsulation, in the Asserted Versioning Framework (AVF) we are developing, is provided by an API; the Calls to that API may be replaced by native SQL issued directly to a DBMS once temporal extensions to SQL are approved by standards committees and implemented by vendors.2 Functioning in this way as a persistence framework, what the AVF persists is not simply data in relational tables. It persists both assertions and versions, and it enforces the semantic constraints between and among these rows which are the temporal analogues of entity integrity and referential integrity.
2As we go to press, we are attempting to support “Instead Of” triggers in release 1 of the AVF. With these triggers, single-statement SQL inserts, updates and deletes can be translated by the AVF into the SQL statements that physically modify the database. Often, this translation generates several SQL statements from the single statement submitted to it.
Functioning as a query service framework, Asserted Versioning provides query encapsulation for access to current data by means of a set of views which allow all queries against current data to continue to work, without modification. Query encapsulation is also provided for queries which need seamless access to any combination and range of past, present and future data, along either or both of two temporal dimensions. With asserted version tables guaranteed to contain only semantically well-formed bi-temporal data, queries against those tables can be remarkably simple, requiring only the addition of one or two point or period of time predicates to an otherwise identical query against current data.
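To sketch the idea, again using the illustrative table from earlier: a view can restore the conventional, current-only picture of the data, so that existing queries run unchanged, while temporal queries add only the time predicates they need:

```sql
-- Illustrative only: a view presenting just the current data from the
-- bi-temporal table, so existing current-state queries work unchanged.
CREATE VIEW Customer AS
SELECT customer_id, customer_status, customer_name
FROM   Customer_Bitemporal
WHERE  CURRENT_DATE >= effective_begin AND CURRENT_DATE < effective_end
  AND  CURRENT_DATE >= assertion_begin AND CURRENT_DATE < assertion_end;

-- An existing query, running unmodified against the view:
SELECT customer_name FROM Customer WHERE customer_id = 1234;
```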
Enterprise Contextualization
A third objective of this book is to explain how to implement temporal data management as an enterprise solution. The alternative, of course, is to implement it piecemeal, as a series of tactical solutions. With tactical solutions, developed project by project for different applications and different databases, some will support temporal semantics that others will not support. Where the same semantics are supported, the schemas and the code that support them will usually be different and, in some cases, radically different. In most cases, the code that supports temporal semantics will be embedded in the same programs that support the application-specific semantics that have nothing to do with temporality. Federated queries, attempting to join temporal data across databases temporalized in different ways by different tactical solutions, will inevitably fail. Once fixed, those queries will often be several times more complex than they would have been if they had joined across a unified enterprise solution.
Asserted Versioning is that enterprise solution. Every table, in every database, that is created as an asserted version table, or that is converted into an asserted version table, will support the full range of bi-temporal semantics. A single unit of code—our Asserted Versioning Framework (AVF), or your own implementation of these concepts—will support every asserted version table in every database.
This code will be physically separate from application code. All logic to maintain temporal data, consequently, will be removed from application programs and replaced, at every point, by an API Call to the AVF. Federated queries against temporal data will not need to contain ad hoc manipulations whose sole purpose is to resolve differences between different implementations of the same temporal semantics, or to scale a more robust implementation for one table down to a less expressive one for another table.
As an enterprise solution, Asserted Versioning is also a bridge to the future. That future is one in which temporal functionality will be provided by commercial DBMSs and directly invoked by SQL transactions and queries.3 But Asserted Versioning can be implemented now, at a pace and with the priorities chosen by each enterprise. It is a way for businesses to begin to prepare for that future by removing procedural support for temporal data from their applications and replacing it with declarative Call statements which invoke the AVF. Hidden behind API Calls and views, the eventual conversion from Asserted Versioning to commercially implemented solutions, if the business chooses to make that conversion, will be nearly transparent to the enterprise. Most of the work of conversion will already have been done.
3Although the SQL standard does not yet include temporal extensions to accommodate bi-temporal data, Oracle Corporation has provided support for several aspects of bi-temporality in its 11g Workspace Manager. We review Workspace Manager, and compare it to Asserted Versioning, in a separate document available on our Elsevier webpage and also at AssertedVersioning.com.
But other migration strategies are also possible. One is to leave the AVF in place, and let future versions of the AVF retire its own code and instead invoke the temporal support provided by these future DBMSs, as vendors make that support available. As we will see, in particular in Chapters 12, 13 and 16, there is important bi-temporal functionality provided by Asserted Versioning that is not yet even a topic of discussion within the computer science community. With the Asserted Versioning Framework remaining in place, a business can continue to support that important functionality while migrating to commercial implementations of specific temporal features as those implementations become available, and it can do this without needing to modify application code.
Internalization of Pipeline Datasets
The final objective of this book is to describe how to bring pending transactions into the production tables that are their targets, and how to retain posted transactions in those same tables. Pending transactions are insert, update and delete statements that have been written but not yet submitted to the applications that maintain the production database. Sometimes they are collected outside the target database, in batch transaction files. More commonly, they are collected inside the target database, in batch transaction tables. Posted transactions, as we use the term, are copies of data about to be inserted, and before-images of data about to be updated or deleted.
Borrowing a metaphor common to many logistics applications, we think of pending transactions as existing at various places along inflow pipelines, and posted transactions as data destined for some kind of logfile, and as moving towards that destination along outflow pipelines. So if we can bring pending transactions into their target tables, and retain posted transactions in those same tables, we will, in terms of this metaphor, have internalized pipeline datasets.4
4“Dataset” is a term with a long history, and not as much in use as it once was. It refers to a named collection of data that the operating system, or the DBMS, can recognize and manage as a single object. For example, anything that shows up in Windows Explorer, including folders, is a dataset. In later chapters, we will need to use the term in a slightly different way, but for now, this is what we mean by it.
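To see how a pending transaction might be internalized, consider again the illustrative bi-temporal Customer table. A row inserted with an assertion period beginning in the future exists in its target table now, but remains invisible to current-state views until that future date arrives. This sketch anticipates, in simplified form, the deferred transactions discussed in later chapters; the specific values are invented for the example:

```sql
-- Illustrative only: a pending transaction internalized as a row whose
-- assertion period begins in the future. Current-state views, which
-- require assertion_begin <= CURRENT_DATE, will not see this row until
-- January 1, 2011.
INSERT INTO Customer_Bitemporal
       (customer_id, effective_begin, effective_end,
        assertion_begin, assertion_end, customer_status, customer_name)
VALUES (5678, DATE '2011-01-01', DATE '9999-12-31',
        DATE '2011-01-01', DATE '9999-12-31', 'VIP', 'Smith, Mary');
```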
Besides production tables, the batch transaction files which update them, and the logfiles which retain the history of those updates, production data exists in other datasets as well. Some production tables have history tables paired with them, in which all past versions of the data in those production tables are kept. Sometimes a group of rows in one or more production tables is locked and then copied to another physical location. After being worked on in these staging areas, the data is moved back to its points of origin, overlaying the original locked copies of that data.
In today's world of IT data management, a great deal of the Operations budget is consumed in managing these multiple physical datasets across which production data is spread. In one sense, that's the entire job of IT Operations. The IT Operations schedule, and various workflow management systems, then attempt to coordinate updates to these scattered datasets so those updates happen in the right sequence and produce the right results. Other tools used to ensure a consistent, sequenced and coordinated set of production data across the entire system of datasets and pipelines include DBMS triggers associated with various pre-conditions or post-conditions, asynchronous transaction managers, and manually coordinated asynchronous feeds from one database to another.
These processes and environments are both expensive to maintain and conducive to error. For example, with history tables, and work-in-progress in external staging areas, and a series of pending transaction datasets, a change to a single semantic unit of information, e.g. to the policy type of an insurance policy, may need to be applied to many physical copies of that information. Even with triggers and other automated processes to help, some of those datasets may be overlooked, especially the external staging areas that are not always there, and so are not part of regularly scheduled maintenance activity. If the coordination is asynchronous, i.e. not part of a single atomic and isolated unit of work, then latency is involved, a period of time in which the database, or set of databases, is in an inconsistent state. Also, error recovery must take these interdependencies into consideration; and while the propagation of updates across multiple datasets may be partially or completely automated, recovery from errors in those processes usually is not, and often requires manual intervention.
This scattering of production data also affects those who write queries. To get the information they are looking for, query authors must know about these scattered datasets, because they cannot assume that all the data that might be qualified by their queries is contained in one place. These datasets differ in life cycle stage (e.g. pending transactions, posted transactions, past, present or future versions). Across them, there will inevitably be some level of redundancy. Frequently, no one table will contain all the instances of a given type (e.g. all policies) that are needed to satisfy a query.
Think of a world of corporate data in which none of that is necessary, a world in which all pipeline datasets are contained in the single table that is their destination or their point of origin. In this world, maintaining data is a “submit it and forget it” activity, not one in which maintenance transactions are initially created, and then must be shepherded through a series of intermediate rest and recuperation points until they are eventually applied to their target tables. In this world, a query is never compromised because some source of relevant data was overlooked. In this world, production tables contain all the data about their objects.
Asserted Versioning as Methodology and as Software
This book presents the concepts on the basis of which a business could choose to build its own framework for managing temporal data. But it also describes software which we ourselves are building as we write this book. A prototype of this software is available at our website, AssertedVersioning.com, where interested users can submit both maintenance transactions and queries against a small bi-temporal database. Our software—the Asserted Versioning Framework, or AVF—generates bi-temporal tables from conventional logical data models, ones which are identical to models that would generate non-temporal database schemas. The data modeler has only to indicate which entities in the logical model should be generated as bi-temporal tables, and to supply as metadata some additional parameters telling the AVF how to manage those tables. There is no specific temporal design work to do.
In its current manifestation, this software generates both its schemas, and the code which implements the rules enforcing temporal data semantics, from ERwin data models only, and relies heavily on ERwin's user-defined properties and its macro scripting language. Computer Associates provided technical resources during the development of this software, and we expect to work closely with them as we market it.
Additional information about Asserted Versioning, as well as a working prototype of this product, can be found on our website, AssertedVersioning.com. We have also recorded several seminars explaining these concepts and demonstrating their implementation in our software. These seminars are available at our website, and from Morgan-Kaufmann at www.elsevierdirect.com/companions/9780123750419.
The authors have filed a provisional patent application for Asserted Versioning, and are in the process of converting it to a patent application as this book goes to press. The authors will freely grant any non-software-vendor company the right to develop its own temporal data management software based on the concepts presented in this book and protected by their forthcoming patent, as long as that software is for use by that company only, and is not sold, leased, licensed or given away to any other company or individual.
Acknowledgements
This book began as a bi-monthly series in DM Review magazine (now Information Management) in May of 2006, and the series continued in an on-line companion publication for nearly three years. We want to thank the two senior editors, Mary Jo Nott and, succeeding her, Julie Langenkamp, for their encouragement and for the opportunity they gave us to develop our ideas in that forum.
Our editors at Morgan-Kaufmann were Rick Adams and Heather Scherer. They provided guidance when we needed it, but also stood back when we needed that. Their encouragement, and their trust that we would meet our deadlines even when we fell behind, are very much appreciated.
Our reviewers for this book were Joe Celko, Theo Gantos, Andy Hessey, Jim McCrory, Stan Muse and Mark Winters. They have provided valuable help, suggesting how the organization of the material could be improved, pointing out topics that required more (or less) explanation, and challenging conclusions that they did not agree with. Bi-temporality is a difficult topic, and it is easy to write unclearly about it. Our reviewers have helped us eliminate the most egregious un-clarities, and to sharpen our ideas. But less than perfectly pellucid language certainly remains, and ideas can always be improved. For these and any other shortcomings, we are solely responsible.
We would also like to thank Dr. Rick Snodgrass who, in the summer of 2008, took a couple of unknown writers seriously enough to engage in a lengthy email exchange with them. It is he who identified, and indeed gave the name to, the idea of deferred transactions as a new and possibly useful contribution to the field of data management. After several dozen substantive email exchanges, Rick concluded that our approach contained interesting ideas worth exploring; and it was in good part because of this that my co-author and I were encouraged to write this book.
Tom Johnston's Acknowledgements
Needless to say, I could not have written this book, nor indeed developed these ideas, without the contributions of my co-author, Randy Weis. Randy and I have often described our relationship as one in which we come up with an idea, and then I think through it in English while he thinks through it in code. And this is pretty much how things work with us.
As this book and our software co-evolved, there was a lot of backtracking and trying out different ways of accomplishing the same thing. If we had not been able to foresee the implementation consequences of many of our theoretical decisions, we could have ended up with a completed design that served very poorly as the blueprint for functioning software. Instead, we have both: a blueprint, and the functioning software which it describes. This book is that blueprint. Our Asserted Versioning Framework is that software.
I have had only two experiences in my career in which that think/design/build iterative cycle was as successful as I could ever have wished for; and my work with Randy has been one of them. Developing software isn't just constructing the schemas and writing the code that implements a set of ideas. Building software is a process which both winnows out bad ideas and suggests—to designers who remain close to the development process, as well as to developers who are already deeply involved in the design process—both better ideas and how the original design might be altered to make use of them. In this iterative creative process, while Randy did most of the software development, the ideas and the design are ours together. Randy has been an ideal collaborative partner, and I hope I have been a good one.
I would also like to thank Jean Ann Brown for her insightful comments and questions raised in several conversations we had while the articles on which this book is based were being written. She was especially helpful in providing perspective for the material in Chapter 1. Her friendship and encouragement over the course of a professional relationship of nearly twenty years is deeply appreciated. I also want to thank Debbie Dean, Cindi Morrison, and Ian Rushton, who were both supportive and helpful when, nearly five years ago, I was making my first attempt to apply bi-temporal concepts to real-world databases.
My deepest values and patterns of thought have evolved in the close partnership and understanding I have shared for over forty years with my wife, Trish. I would not be the person I am without her, and I would not think the way I do but for her.
My two sons are a source of inspiration and pride for me. My older son, Adrian, has already achieved recognition as a professional philosopher. My younger son Ian's accomplishments are less publicly visible, but are every bit as substantive.
Randy Weis' Acknowledgements
Mark Winters and I worked closely together in the mid-1990s designing and implementing a bi-temporal data model and a corresponding application based on IBM's Insurance Application Architecture (IAA) conceptual model. The bi-temporal pattern was developed to support the business requirement to view the data and recreate any report exactly as it appeared when originally created, and also as of any other given point in time.
Mark was one of the key architects on this project, and is currently an Enterprise Data Architect at one of the country's leading health insurers. He has continued to be a strong proponent of using bi-temporality, and has developed a series of scenarios to communicate the business value of bi-temporality and to validate the integrity of the application we built. Mark's contribution to this work has been invaluable.
There have also been other Data Architects who have helped me develop the skills necessary to think through and solve these complex problems. Four of these excellent Data Architects are Kim Kraemer, Dave Breymeyer, Paul Dwyer and Morgan Bulman. Two other people I would like to thank are Scott Chisholm and Addison McGuffin, who provided valuable ideas and fervent support in this venture. There are others, too many to mention by name, who have helped me and taught me throughout the years. I would like to thank all of them, too.
This book would have never come to fruition without my coauthor, Tom Johnston. I wanted to write a book on this topic for several years because I saw the significant value that bi-temporality brings to business IT organizations and to the systems they design. Tom had the skills, experience and in-depth knowledge about this topic to make this dream a reality. Not only is Tom an excellent writer, he also knows how to take scattered thoughts and organize them so they can be effectively communicated.
Moreover, Tom is a theoretician. He recognizes patterns, and always tries to make them more useful by integrating them into larger patterns. But he has worked in the world of business IT for his entire career. And in that world, theory is fine, but it must ultimately justify itself in practice. Tom's commitment to theory that works is just as strong as his attraction to patterns that fit together in a beautiful harmony.
Besides Mark Winters, Tom is the only person I ever met who really understands bi-temporal data management. Tom's understanding, writing abilities and contributions to this work are priceless. His patience and willingness to compromise and work with me on various points are very much appreciated, and contributed to the success of this book. It has been great working with Tom on this project. Not only has Tom been an excellent coauthor, but he has also become a wonderful and trusted friend.
I also want to thank my wife, Marina. She has believed in me and supported me for over thirty years. Her faith in me helped me to believe in myself: that my dreams, our dreams, with God's blessings, were attainable. She was also very patient with my working late into the night. She understood me when she was trying to talk with me, and I was fixated on my laptop. She would serve me like I was a king, even when I felt like the court jester. Her encouragement helped me accomplish so much, and I couldn't have done any of it without her. My children, Matt, Michelle and Nicolle were also very supportive while I chased my dreams. I thank God for the opportunities I have been given and for my wonderful family and friends.
Finally, we would both like to thank you, our readers, the current and next generation of business analysts, information architects, systems designers, data modelers, DBAs and application developers. You are the ones who will introduce these methods of temporal data management to your organizations, and explain the value of seamless real-time access to temporal data to your business users. Successful implementation of seamless access to all data, and not just to data about the present, will result in better customer service, more accurate accounting, improved forecasting, and better tracking of data used in research. The methods of managing temporal data introduced in this book will enhance systems used in education, finance, health care, insurance, manufacturing, retailing and transportation—all industries in which the authors have had consulting experience.
In using these methods, you will play your own role in their evolution. If DBMS vendors are wise, your experiences will influence their implementation of server-side temporal functionality and of your interfaces to that functionality. If standards committees are wise, your experiences will influence the evolution of the SQL language itself, as it is extended to support uni- and bi-temporal constructs and transformations. If IT and business management in your own organizations are wise, and if your initial implementations are successful, then your organizations will be positioned on the leading edge of a revolution in the management of data, a position from which business advantage over trailing edge adopters will continue to be enjoyed for many years.
Theory is practical, as we hope this book will demonstrate. But the relationship of theory and practice is a two-way street. Computer scientists are theoreticians, working from theory down to practice, from mathematical abstractions to their best understandings of how those abstractions might be put to work. IT professionals are practitioners, working from specific problems up to best practice approaches to classes of similar problems.
Common ground can sometimes be reached, ground where the “best understandings” of computer scientists meet the “best practices” of IT professionals. Here, theoreticians may glimpse the true complexities of the problems to which their theories are intended to be relevant. Here, practitioners may glimpse the potential of powerful abstractions to make their best practices even better.
We conclude with an example and a maxim about the interplay of theory and practice.
The example: Leonhard Euler, one of history's greatest mathematicians, created the field of mathematical graph theory while thinking about whether a Sunday afternoon stroller could cross each of the seven bridges of Königsberg exactly once.
The maxim: to paraphrase Immanuel Kant, one of history's greatest philosophers: “theory without practice is empty; practice without theory is blind”.
Glossary References
Glossary entries whose definitions form strong inter-dependencies are grouped together in the following list. The same glossary entries may be grouped together in different ways at the end of different chapters, each grouping reflecting the semantic perspective of each chapter. There will usually be several other, and often many other, glossary entries that are not included in the list, and we recommend that the Glossary be consulted whenever an unfamiliar term is encountered.
Allen relationships
Asserted Versioning
Asserted Versioning Framework (AVF)
assertion table
bi-temporal table
conventional table
non-temporal table
uni-temporal table
version table
deferred transaction
design encapsulation
maintenance encapsulation
query encapsulation
event
object
thing
seamless access
temporal data