In composing a data model, structures are put together thoughtfully and with intention. Data structures emerge from the semantics germane to the universe of discourse, filtered through the rules of normalization. Each table expresses a single meaning, with columns that are self-evident. The best models reduce the data items within the covered subject area to obvious arrangements. That very simplicity misleads observers, persuading many that modeling itself must be a simple task. DBMS tools often incorporate wizards that facilitate the quick definition of tables, which are then immediately created within the chosen database. These tools enable a developer to build tables in the blink of an eye. Prototypes are sometimes approached in this fashion, and while that provides placeholders, such slapped-together table structures are insufficient for an industrial-strength solution. With these instant tables, developers often think nothing of reworking large sections of code for minor changes, or of overloading data elements to mean many different things, which makes determining the meaning of a value at a specific point in time unclear. Unaware of these less-than-stellar consequences, users are left confused; they wonder why modeling tasks should ever be needed, because the proof of concept worked, didn't it?
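The overloading problem can be sketched concretely. In this hypothetical example (the table and column names are invented for illustration), a hastily created table stores several kinds of facts in one generic column, so the meaning of each row depends on a flag that only the application code knows how to interpret:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Instant-table approach: one generic "value" column whose meaning
# depends on a "kind" flag interpreted only in application code.
conn.execute("CREATE TABLE customer_data (cust_id INT, kind TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO customer_data VALUES (?, ?, ?)",
    [(1, "PHONE", "555-0100"),
     (1, "CREDIT", "750"),        # a credit score, stored as text
     (1, "CREDIT", "suspended")]  # the same flag reused for account status
)

# Modeled approach: each fact gets its own self-evident column,
# with a type the DBMS can actually enforce.
conn.execute("""CREATE TABLE customer (
    cust_id        INTEGER PRIMARY KEY,
    phone          TEXT,
    credit_score   INTEGER,
    account_status TEXT
)""")
conn.execute("INSERT INTO customer VALUES (1, '555-0100', 750, 'suspended')")

# In the generic table, nothing in the structure says which
# CREDIT row is the score and which is the status.
rows = conn.execute(
    "SELECT value FROM customer_data WHERE cust_id = 1 AND kind = 'CREDIT'"
).fetchall()
print(len(rows))  # 2 -- ambiguous; the meaning lives in the code

score = conn.execute(
    "SELECT credit_score FROM customer WHERE cust_id = 1"
).fetchone()[0]
print(score)  # 750 -- unambiguous; the meaning lives in the structure
```

The second structure documents itself: any application, report, or interface that reads it gets the same meaning without consulting the original developer's code.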
Data modeling is important, and it is a task that must be done. The specific meaning of each data item is always important enough to document. Things are not always as obvious as a developer may think they are while programming. Modeled structures convey meaning; when modeling is not done, much of that meaning ends up residing inside the code. Developers often suffer a kind of myopia, assuming their code is the only thing that will ever need to use a given data item. Ultimately, when structures are approached and designed systematically, they provide ease of scalability, as extensions and additions emerge from logical growth points.
If sufficient time is not spent properly designing the database before implementation, then more time will be spent after implementation resolving issues that arise from an application built upon fragile and inflexible designs. The problems multiply when other applications must interface with the original one. Such interfaces may need constant rework as they attempt to untangle meaning from the arbitrary and ever-changing data items of the original, non-designed system. Sadly, time is not always the reason inadequate effort is expended on modeling. Simple tables truly can be designed speedily and easily. In stable businesses, the data is not ever-changing; it remains a known quantity, and reusing that data in new and enhanced applications should be modeled almost effortlessly. When these conditions exist and modeling is still left undone, the cause can only be attributed to bad habits within the development group. However, when data is new or not well understood, modeling efforts can take time, as the meaning and use of these new data elements materialize through the processes used for data discovery.
When does a data model not matter? When is it appropriate to take shortcuts and simply create a table for storing data without much consideration of its context? If one is dealing with a true black box, a one-off application that does not need to interact with anything, contains no information to be shared, and will only ever be maintained by the one person developing it (or will never need maintenance at all), then under those circumstances it is possible to simply go with structures that are convenient and optimized for the developer. Otherwise, one should always plan on designing data structures that are meaningful, accurate, and useful.