Background

Beginnings

One of my first managers always said that “computer science” and “software engineering” were misnomers. His view was that there was no “science” involved in computer science, and certainly no “engineering” in software engineering. Others have held this view as well, notably Michael Davis (Davis 2011, 32), and only recently has some of the science in computer science been described and acknowledged (Denning 2013, 35). This struck a chord with me, and for many years I wondered why so many information technology (IT) projects fail, and why we sometimes spend millions of dollars only to end up with systems that do not work properly, do not meet requirements, or both.

This book is the culmination of more than thirty-five years of experience with and study of IT and its application. Its primary objective is to describe a more rigorous approach to IT architecture that brings science and engineering disciplines into the process where possible. It also unifies IT architecture—the same IT architecture process and deliverables may be applied anywhere within the enterprise (or even across enterprises), with all architecture definitions remaining useful and relatable to one another.

This naturally requires some discipline on the part of IT architects and developers to ensure that these separate IT architectures remain consistent and compatible; therefore, clear definitions of techniques and deliverables, along with a methodology for their development, are needed—the Unified Architecture Method (UAM) described in this book. The complete methodology (minus the Logical and Technical Level Profile Languages) is published in the form of a web site, available online at:

unified-am.com/UAM/

What follows is some background on information technology: how it has evolved, along with some key definitions that aid in understanding the reasons for the structure of and approach to IT architecture as defined in the UAM.

Evolution of IT Development

Information technology, as it is now known, is a comparatively young field. Electronic computing began in the 1940s, but the use of computers for business purposes did not happen in earnest until the 1960s. The programmability of computers evolved along with the hardware, with high-level languages becoming the norm.

Hardware and software systems became more and more complex over time. To deal with this complexity, a number of tools and techniques were developed, such as:

  • High-level languages;
  • Standard algorithms such as Quicksort;
  • Database design tools.

Many other techniques, algorithms, and tools were developed through the 1980s—then came the Internet and pervasive networking. Not only were home computers connected to the Internet; businesses also used networking to their advantage, internally and externally, along with all of the standard office tools and software, such as:

  • Web browsers;
  • E-mail;
  • Newsgroups, blogs, and other information sharing tools;
  • Distributed, grid, and cloud computing.

The advent of network-based computing dramatically changed IT forever. Systems are now connected at all layers, from network connectivity at the physical and transport layers to peer-to-peer interactions at the application layer, and everything in between. These layers and this connectivity have substantially increased the complexity and size of interconnected systems since the 1950s and 1960s. How does this relatively rapid evolution of IT in recent years, this “computing revolution”, compare to the Industrial Revolution? Can we learn anything from this comparison? Most definitely, with the principal areas being:

  • Discovering and developing related science and engineering;
  • Maturity of tools, techniques, development, and implementation; and
  • The discipline required to rigorously define and develop proper solutions.

The Industrial Revolution had a profound effect upon business and industry, which is mirrored in many ways in the current “computing revolution”. A comparison of the two provides further insights into IT and the tools and techniques that are important for its use and exploitation.

Industrial Revolution vs. Computer Revolution

The Industrial Revolution did not reach fruition until the widespread introduction of electricity and the electric motor in the late 1800s, and industry was further revolutionized by the assembly-line concept for factories, among other improvements, in the twentieth century. One could therefore argue that the Industrial Revolution took 150 to 200 years to reach a refined state of maturity—maturity of techniques, processes, tools, and technology.

What tools and techniques enabled the continued advancements during the Industrial Revolution? In a nutshell, those provided by science and engineering. The development and maturing of these disciplines, in essence, captured the knowledge of materials, chemistry, tools, and techniques in a way that made it accessible and useful to the next wave of engineers. This body of knowledge was continually refined and built upon, to the point where the engineering approach to problems was developed. These engineering disciplines and the associated bodies of knowledge were fully recognized and utilized.

On the other hand, the Computer Revolution, being only about 60 to 70 years old, should be viewed as quite immature in terms of techniques, tools, and technology. Compared with the Industrial Revolution's timeline, the Computer Revolution is in the early stages of developing the techniques needed to implement improved solutions for the information age. What further parallels can be drawn with the Industrial Revolution?

As noted, the huge success and impact of the Industrial Revolution resulted from the capture and refinement of knowledge, as well as the development of a disciplined engineering approach. Similar steps have been taken in the Computer Revolution with the refinement of techniques, processes, tools, and technology. What has yet to emerge is a disciplined approach similar to the engineering approaches developed during the Industrial Revolution. Some would argue that the Computer Science or Computer Engineering disciplines fill this void. This is only partially true, because more science and engineering are needed.

A mathematical foundation is another characteristic of the engineering disciplines developed during the Industrial Revolution. Mathematics was used to validate designs and to develop new, proven approaches, resulting in reduced risk and very high success rates. Some mathematical rigour is used in IT development, but a greater acceptance of the need for rigour, and a more consistent, disciplined approach to problems and to the definition of solutions, are required.

What further refinements are required in the IT context? A civil engineering example is instructive: in the industrial context, nothing happens until clear (and formal) agreement is obtained on the requirements and on the overall concept and approach. This is followed by preliminary designs and then detailed design and engineering, with approvals obtained all along the way. A great deal of work is done before breaking ground. A similarly rigorous approach is therefore needed in the IT context, regardless of the context and size of the project—architecture is required, regardless of the system development approach used. This is what UAM provides for IT architecture and design: it defines the approach (i.e., the methodology) and the tools and techniques (i.e., the viewpoints and associated modelling languages) required for a more rigorous definition of solutions.

Architecture vs. Engineering

A final important question: what is architecture versus engineering? In the IT context, where is each of these disciplines applied, and do their meanings differ between the IT context and the industrial context? Let us formulate precise definitions of architecture and engineering.

Architecture is a design discipline that concerns itself with aesthetics as well as utilitarian considerations—architecture is the balance of form and function in meeting the requirements.

Engineering is “the application of scientific principles to the optimal conversion of natural resources into structures, machines, products, systems, and processes for the benefit of mankind”. “Scientific principles” refers to the rigour involved in the engineering discipline, which includes mathematics, chemistry, and the material sciences, among many others. Each engineering discipline has an associated “great body of special knowledge”. Engineering places a very strong emphasis on function and takes a scientific approach, and may therefore be viewed as a more specific form of design that is very close to implementation. In short, engineering is a design discipline that concerns itself mostly with utilitarian considerations backed by scientific principles—function is the key.

Which discipline is best applied in the context of IT systems? Both—an overall plan (architecture) is definitely required, especially for large or new IT systems, but engineering rigour is also required in order to achieve the desired result in an efficient and cost-effective manner, with some assurance of success.

Architecture is about balancing form and function; the two are distinguishable but not separable—they are closely related and influence each other. The following definition of IT architecture is an extension (extensions in italics) of the definition of architecture in ISO 42010 (ISO 2011, 2).

IT Architecture is the fundamental concepts or properties and organization of a system in its environment (i.e., in context) embodied in its elements that are almost arbitrarily arranged, their relationships, and in the principles guiding its design and evolution—defining the form (of elements) is the main objective, but balancing form and function is key.

On the other hand, engineering is more restrictive and generally starts after the architecture is defined (e.g., the component parts of a bridge and their relationships are defined first: piers, abutments, spans, etc.); one engineers a bridge or a building after its architecture is defined. Engineering is therefore defined as follows:

Engineering is a more restrictive design activity in which the components and their relationships are already defined (i.e., the architecture is defined) and scientific rigour and precision are now applied to ensure success—function is key.

Why UAM?

A big question at this point may be “why define yet another methodology?” There are many methodologies currently available; however, there are a number of fundamental problems with all of these existing approaches:

  • They address only the Enterprise Architecture (EA) level;
  • The models required by these methodologies are often arbitrary;
  • Modelling languages are not defined, or are ill-defined, resulting in even more inconsistency and arbitrariness;
  • Architectural Decisions (the analysis and documentation of decisions) are not well addressed in other approaches, if at all;
  • The level of coverage of the “system” being architected is unspecified or unknown;
  • The maintenance phase of the system life-cycle is often missing;
  • These methodologies are quite inflexible and cannot easily be adapted to the needs of the enterprise.

A brief comparison of UAM with the main methodologies currently in use is contained within UAM: UAM vs. Other Methodologies.

UAM may be applied to any context or situation, and if this is done wisely, the resulting architectures will directly relate to and complement higher- or lower-level architecture efforts through the notion of fractal architectures. The clear separation of concerns in UAM, along with levels of abstraction and encapsulation, facilitates this unified, integrated, and connected approach to IT architecture within the enterprise. The IT security concepts in UAM (Authority, Domain, Zone, etc.) not only address the need to architect-in security; they also facilitate managing complexity and defining architecture fractals through a hierarchical approach to the definition of Domains and Zones. Finally, the Location and People aspects are not well represented in other approaches to IT architecture, resulting in incomplete architectures.
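
To make the fractal notion concrete, the following minimal sketch (in Python; the class names, fields, and sample hierarchy are illustrative assumptions, not UAM deliverables) models Domains that nest within one another, each defining its own Zones, so that the same Domain/Zone pattern repeats at every level:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Zone:
        """A security Zone defined within a Domain (name is illustrative)."""
        name: str

    @dataclass
    class Domain:
        """A Domain of authority; Domains nest, giving the fractal structure."""
        name: str
        zones: List[Zone] = field(default_factory=list)
        subdomains: List["Domain"] = field(default_factory=list)

        def walk(self, depth=0):
            """Yield (depth, domain) pairs over the whole hierarchy."""
            yield depth, self
            for sub in self.subdomains:
                yield from sub.walk(depth + 1)

    # A toy enterprise in which each level repeats the same Domain/Zone pattern.
    enterprise = Domain(
        "Enterprise",
        zones=[Zone("DMZ"), Zone("Internal")],
        subdomains=[
            Domain(
                "Business Line A",
                zones=[Zone("Internal")],
                subdomains=[Domain("Business Unit A1", zones=[Zone("Internal")])],
            )
        ],
    )

    for depth, d in enterprise.walk():
        print("  " * depth + d.name + ": zones = " + str([z.name for z in d.zones]))

The point of the sketch is the self-similarity: an architecture defined for “Business Unit A1” has the same shape as one defined for the whole enterprise, which is what allows separately developed architectures to relate to one another.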

Existing methodologies have confusing frameworks, with little in the way of an underlying metamodel or a logical approach to the specification of their models. The one possible exception is TOGAF (The Open Group Architecture Framework), but even it has neither a very robust underlying metamodel nor a logical approach to the definition of artifacts for the various stakeholders involved.

In summary, UAM provides a more logical approach to the definition of IT architectures, including their usage and maintenance over the long term. Completeness of the architecture is also ensured through this logical approach, supported by the well-defined framework. Finally, UAM may be applied to any context within the enterprise, from the complete enterprise itself (i.e., EA) down to individual sub-organizations or systems.

Goals of UAM

UAM may be viewed as the merging of (a modified subset of) The Zachman Framework, Model-Driven Architecture (MDA), Business Process Model and Notation (BPMN), the Unified Modelling Language (UML), and other standards into a complete set of modelling languages, along with a methodology that addresses the following:

  • Applicability and alignment at and between all architectural levels: Enterprise; Business Line; Business Department; Business Division; Business Unit; Component.
  • Unified architectural models through the definition of:
      • Business, Logical, and Technical perspectives (levels);
      • Data, Activity, Location, and People modelling aspects;
      • Separation of concerns into twelve viewpoints (models), as sketched after this list;
      • Standardized and interrelated viewpoints;
      • Consistent and standardized language for viewpoints;
      • Standard architectural language and concepts within and between viewpoints;
      • Consistent and related architecture definitions (i.e., models at different enterprise levels are IT architecture “fractals”).
  • The transformation of business concepts and concerns into IT concepts and concerns, layer by layer, providing traceability back to business needs;
  • Support for better decision-making, with persistence (i.e., preservation of corporate knowledge and corporate decisions);
  • Consistent management of complexity through encapsulation and decomposition;
  • Support for a Model-Driven Architecture (MDA) approach—MDA and the Model-Driven Enterprise (MDE) (OMG 2001);
  • Stakeholder-friendly and understandable models targeting their different concerns;
  • Coherent ways of using IT architectures through the notions of Language, Blueprint, Decision, and Literature (Smolander, Rossi and Purao 2005);
  • A rigorous and repeatable architecture methodology.
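
The twelve viewpoints appear to follow from crossing the three perspectives with the four aspects listed above. The short sketch below (in Python; the cross-product reading and the generated names are assumptions made here for illustration, not part of UAM's specification) enumerates that pairing:

    from itertools import product

    # The three perspectives (levels) and four aspects listed in the goals above.
    PERSPECTIVES = ["Business", "Logical", "Technical"]
    ASPECTS = ["Data", "Activity", "Location", "People"]

    # Assumed reading: one viewpoint per (perspective, aspect) pair,
    # which yields exactly the twelve viewpoints referred to above.
    viewpoints = [p + " " + a + " Viewpoint" for p, a in product(PERSPECTIVES, ASPECTS)]

    assert len(viewpoints) == 12
    for v in viewpoints:
        print(v)

Running this prints twelve names, from “Business Data Viewpoint” through “Technical People Viewpoint”, matching the count of viewpoints given in the goals.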

Another important goal was to not reinvent the wheel. Every effort was made to use existing standards, whether industry or de facto. ISO 42010 and BPMN are central to UAM, along with The Zachman Framework. MDA concepts, the Information Technology Infrastructure Library (ITIL), and other standards also influenced the definition of UAM. MDA is important because one of its basic principles is that the models are the system; they are not throw-away artifacts.

Also see the overview of UAM: UAM Overview.