Thursday, September 24, 2020

The Last Programming Language

What is the ultimate 3GL programming language?  The history of programming languages is full of attempts that fell short. I really enjoyed this one-hour trip down memory lane by Uncle Bob, in which he evaluates programming languages, explaining what is good about each language and where it failed. In the end he recommends a language with a surprising set of functional, Lisp-like features with triples. Looking into this language further, I'm impressed by what it may eventually be able to do with RDF, graph DBMSs, and other metadata-rich environments. Enjoy the show:



Also worth knowing is the special-purpose Wolfram Language for one-off complex analysis.

Thursday, September 17, 2020

What is the most powerful DBMS on the planet?

To answer this question you need to look at independently audited, standardized benchmarks. The benchmarks provided by the Transaction Processing Performance Council (TPC) are my favorite, although there are others worth considering.  If a DBMS vendor does not provide such a benchmark then I would be reluctant to agree with any of their claims of superior performance.  At one time an account rep whispered to me that they had secretly run the benchmark but discovered their performance was so bad they decided never to publish it.  Even some of the correlated SQL queries within the benchmark refused to execute.  Other vendors have indicated the same.

When you look at the test results today you will likely be surprised.  Many of the most popular DBMS vendors no longer top the lists, and the winners may be vendors you have never heard of.

The benchmarks I like to look at are TPC-C for transaction processing and TPC-H for analytics:

The winner of TPC-C is the Chinese Alibaba Cloud’s OceanBase, yielding 707,351,007 transactions per minute!  It leaves Oracle & DB2 in the dust, and SQL Server is almost nowhere to be found (for locking reasons).

The winner of TPC-H for analytic processing is EXASOL’s EXASolution, yielding 11,612,395 queries per hour!  It also has the best Price/QphH.

A special mention should be given to Alibaba Cloud’s AnalyticDB.  Like General Electric’s Predix IoT cloud DBMS, it is built on Pivotal’s Greenplum open-source DBMS, which is an MPP version of PostgreSQL for petabyte-scale databases.  Special mention is given because they both integrate the MADlib MPP statistical library, which features machine learning algorithms at scale.  There is a lot of ROI that can be gained by doing A.I. at terabyte volume. (Most ML has to be done with datasets measured in kilobytes.)  This integrates ML right into the SQL command and generates cost-optimized query plans that consider the cost of data movement.
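As a sketch of what in-database ML looks like, here is hypothetical Python that composes a call to MADlib's linear-regression trainer (the table, column, and output names are invented for illustration). The point is that the model trains inside the MPP cluster, so the data never has to move out of the database:

```python
# Sketch: composing a MADlib in-database ML call from Python.
# Table and column names (sensor_readings, model_out, temp, load, rpm)
# are hypothetical; madlib.linregr_train is MADlib's linear-regression trainer.

def madlib_linregr_sql(source_table, out_table, dependent, independents):
    """Build the SQL that trains a linear model inside the MPP database,
    so the training data never leaves the cluster."""
    indep_expr = "ARRAY[1, " + ", ".join(independents) + "]"
    return (
        f"SELECT madlib.linregr_train("
        f"'{source_table}', '{out_table}', "
        f"'{dependent}', '{indep_expr}');"
    )

sql = madlib_linregr_sql("sensor_readings", "model_out", "temp", ["load", "rpm"])
print(sql)
```

The resulting statement would be submitted like any other SQL, letting the planner distribute the work across the MPP segments.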

There is another reason to study TPC benchmarks.  Each benchmark comes with a “Full Disclosure” document telling you what hacks they had to do to get the posted performance (and they all do hacks; no one runs the tests on a default system that has not been tweaked to the max).  Study their disclosure so you can try the same tricks on your own database!  A wealth of insight can be gained this way.  For example, no one runs a DBMS on top of a SAN. They go JBOD and carefully place their data files & partitions.

Friday, August 14, 2020

ARCHITECTURAL GOVERNANCE

 The governance of enterprise architecture is based on understanding the strengths of the people and technologies focused on each role & goal, while also understanding the weaknesses, and the opportunities for miscommunication, that may arise between these same people and technologies when viewed against the overall business goals. In the 8 posts that follow we will answer:

  • What is an Enterprise Architecture?
  • How is an Enterprise Architecture developed and communicated, including selection of technologies, standards and design patterns?
  • What are the many competing “system qualities” that must be balanced before any architecture can be determined?
  • What are the various roles of an Architecture Team?

  1. What is Architecture?
  2. Architectural Methodology
  3. Foundational Models
  4. Non-Functional Qualities
  5. Architectural Style
  6. Architectural Layers
  7. Application Stack
  8. The Architecture Team

For a limited time the entire document can be found at ARCHITECTURAL GOVERNANCE - Governance, Plan, Methodology, Requirements & Design

What is Architecture?

Welcome to the Data Architecture World blog. Every blog has to start somewhere, so let's start by setting definitions.

ANSI/IEEE Std 1471-2000, Recommended Practice for Architectural Description of Software-Intensive Systems, provides the following definition for architecture:
Architecture is defined by the recommended practice as the fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution. This definition is intended to encompass a variety of uses of the term architecture by recognizing their underlying common elements. Principal among these is the need to understand and control those elements of system design that capture the system’s utility, cost, and risk. In some cases, these elements are the physical components of the system and their relationships. In other cases, these elements are not physical, but instead, logical components. In still other cases, these elements are enduring principles or patterns that create enduring organizational structures. The definition is intended to encompass these distinct, but related uses, while encouraging more rigorous definition of what constitutes the fundamental organization of a system within particular domains.
Here is my definition for Enterprise Architecture:
Enterprise Architecture is the set of decisions that must be made at the enterprise level, before specific applications are designed and built, in order to provide conceptual integrity and sanity across the enterprise’s systems. Architecture includes a decomposition of the systems into separate orthogonal viewpoints, along with the enforced rules that enable this clean decomposition and isolation of design viewpoints. This is done so that functional (application requirements), non-functional (system qualities) and other aspects of the application system may be defined and built by independent specialists in their separate fields with minimal interference from each other. An architecture not only divides the system, it also divides the roles and responsibilities of those who work with the system into separate organizational concerns and disciplines that are conceptually tractable and can be effectively managed. An architecture must also incorporate trade-offs, aware that attempting to optimize every desired feature is not possible and can result in systems that do not function effectively.
Benefits of Good Architecture 
  • Better Comprehension 
  • Division of Labor 
  • Greater Reuse, Less Redundancy 
  • Greater Consistency across the Company and Industry due to Open Standards 
  • Local Optimization of Orthogonal or Layered Code Leads to Global Performance Improvement 
  • Possible Performance Gains Through Parallelism
What Constitutes Good Partitioning (separation of concerns)?
  • Encapsulates common underlying technologies and design decisions 
  • Clear Semantic Description 
  • Minimal Coupling between Partitions 
  • May Deploy on Different Systems 
  • Can Tolerate Changes in Design Paradigms and Technologies at Different Levels
In all this an important principle is:
“Improving the development team’s ability [to make independent decisions] gives an architect much greater leverage than being the sole decision maker and thus running the risk of being an architectural bottleneck. This leads to the satisfying rule of thumb that an architect’s value is inversely proportional to the number of decisions he or she makes.” — Martin Fowler

Architectural Methodology

 It is important that you do not choose your architecture by going with the popular buzzwords of the day. Determining the architecture that is right for your enterprise requires a set of iterative steps that starts with understanding your actual requirements and making choices that fit those requirements. Each decision sets the stage for the next, and in some cases trade-offs that must be made in the next decision can cause you to reevaluate the priorities of a prior decision, and thus you iterate.

The most fundamental decision that must be made is a ranking of the non-functional qualities that the systems must support (e.g. availability, security, non-repudiation, etc.). From this follows a determination of the open standards and design patterns (frameworks) that will be incorporated into future applications. Next, the vendors and products that support those open standards are selected. The set of products must integrate well with each other and align their API stack to support the design pattern of the selected Application Stack (e.g. n-tier with clearly defined nodes). The tools used by the developers to code, test and debug the application are selected, and then the tools used to design the application are selected, with the goal of using clear models to generate clean code for rapid application development (RAD) and model-driven architecture (MDA). The modeling process must conform to the selected development methodology for designing robust code that meets the business requirements and management principles.

Requirements, models, code and test data are kept in a version-controlled Reuse Library that supports traceability and impact analysis, showing precise time & budget slippage for a change in requirements. When complete, the above process may iterate to take advantage of low-hanging fruit where a product is found to have a particularly good synergy with other products. The factors that influence application and development architecture can be shown as:


Summary 
  • Determine required Architectural Qualities, and Rank them 
  • Determine the Standards, Design Patterns & Framework that support these Qualities 
  • Select the Technologies & Products that unite to build this Framework 
  • Select the Development & QA Tools that best work with these Technologies 
  • Select Design Tools that feature RAD Generation and Round-Trip Engineering with the Development tools
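The first two summary steps, ranking the qualities and then choosing the framework that best supports them, can be sketched as a weighted scoring exercise. The quality weights and framework support scores below are purely illustrative:

```python
# Sketch: rank the architectural qualities, then score candidate frameworks
# by how well each one supports the ranked qualities.
# All weights and scores below are invented for illustration.

quality_weight = {"availability": 9, "security": 8, "time_to_market": 5}

# support[framework][quality]: 0 (no support) .. 10 (excellent), hypothetical
support = {
    "framework_a": {"availability": 8, "security": 6, "time_to_market": 9},
    "framework_b": {"availability": 9, "security": 9, "time_to_market": 4},
}

def score(framework):
    """Weighted sum: each quality's rank times the framework's support for it."""
    return sum(quality_weight[q] * support[framework][q] for q in quality_weight)

best = max(support, key=score)
print(best, score(best))  # framework_b 173  (9*9 + 8*9 + 5*4)
```

A real evaluation would of course involve far more qualities and evidence per score, but the iterative re-ranking the post describes amounts to adjusting these weights and re-running the comparison.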

Foundational Models

To be useful to the developer, the decisions made in the prior Architectural Methodology post should be converted into three artifacts that are delivered to the program team.

  • An Application Stack Diagram that depicts recommended & approved technologies, frameworks and products in support of corporate standards.
  • Reference Models & Code that depict how to use the recommended design patterns. These can be provided in the form of UML models and a “Hello World” application that uses the entire stack, which may serve as a copy-and-paste starting point for developers.
  • Enterprise Models that depict all the applications in the enterprise and the interfaces between applications new & old.
With the above artifacts, a developer can be assigned one of the new applications depicted in the Enterprise Model and will code it so that it correctly interfaces with the other applications. The developer will use the tools selected in the Application Stack Diagram, which incorporates the Architectural Style and design patterns, and will proceed from the "Hello World" application to ensure all the non-functional requirements are met.


Non-Functional Qualities

The complexity of an application’s functional requirements is not the only driver of system cost. Often cost and complexity are driven by system features that are behind the scenes and distributed across all functions of an application, such as “high availability”. These non-functional requirements have a major impact on the architecture of a system. As it is not possible to satisfy all requirements simultaneously, trade-offs must be made. Consideration of the selection and weight of each of these qualities is provided in the sections below.

I recommend you consider each of the NFRs shown below and rank them in order of importance to your enterprise. Each NFR is ranked on a scale from “0”, representing a quality that no effort is to be spent on, to “10”, representing a quality that must be included even if it means additional cost and significantly delaying the release of the system. A “1” represents an optional, nice-to-have feature. There should be only a few qualities set at “9” or “10”.
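A minimal sketch of such a ranking, including a sanity check that only a few qualities claim the "must have" end of the scale (the quality names and rank values below are illustrative):

```python
# Sketch of the 0-10 NFR ranking described above. Ranks are illustrative.

nfr_rank = {
    "availability": 10, "security": 9, "time_to_market": 7,
    "portability": 4, "personalization": 1, "anonymity": 0,
}

def validate(ranks, max_musts=3):
    """Check every rank is 0..10 and that only a few qualities are
    'must include even at the cost of delay' (9 or 10)."""
    assert all(0 <= r <= 10 for r in ranks.values()), "ranks must be 0..10"
    musts = [q for q, r in ranks.items() if r >= 9]
    assert len(musts) <= max_musts, f"too many must-haves: {musts}"
    return musts

print(validate(nfr_rank))  # ['availability', 'security']
```

Forcing the list through a check like this is one way to keep stakeholders honest: if everything is a 9, nothing is.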

Business Qualities

Affordability:  The ability to build the system with minimum development cost in labor and tools.
Time-to-Market:  The ability to build the system in the minimum amount of time and ahead of the competition for similar functionality.
Functionality:  The ability of the system to perform all the business tasks it was created to do per the user’s requirements.
Regulatory Compliance:  Adheres to governmental regulations.
Buildability:  Whether or not the architecture can reasonably be built using the budget, staff, and time available for delivery of the project.  Buildability is an often-overlooked quality attribute.  Sometimes, the best architects are simply too ambitious for a project team to complete given project constraints and environment.  The design that casts a solution in terms of well-understood concepts is more buildable than a design that introduces new concepts.

User Qualities

Usability:  How easy it is for the user to understand and operate the system.  Usability can be broken down into the following areas:
  • Learnability: How quick and easy is it for a user to learn to use the system's interface?
  • Efficiency: Does the system respond with appropriate speed to a user's requests?
  • Memorability: Can the user remember how to do system operations between uses of the system?
  • Error avoidance: Does the system anticipate and prevent common user errors?
  • Error handling: Does the system help the user recover from errors?
  • Satisfaction: Does the system make the user's job easy?
Gratification:  The ability to anticipate the needs of the user and give more than is asked.  Most systems give only what the user specifies and require the user to expend significant effort in stating the request.  A system with gratification learns the user’s habits and preferences, and intervenes with quick solutions or alternatives when the user does not appear to be making progress.
Localization:  The ability to easily adapt to multiple languages and locales.
Personalization:  The ability of a user to adapt the look and feel of the system to their own taste.
Customer Compliance:  Adheres to the interfaces, data structures and conventions of the customer’s systems without requiring modifications of their systems.
Accuracy:  Ability to provide the right or agreed results or effects with the needed degree of precision.  Includes the ability to resist or correct poor data quality.
Mobility:  Ability to support mobile devices and cellular communications.

Administrator Qualities

Manageability:  Can be expressed in terms of how easy it is to monitor a system and detect operational characteristics related to performance and failures, how easy it is to configure systems, the processes used for effecting this control, and the degree to which the system can be managed remotely.
Administrability:  Can be expressed in terms of how easy it is to configure an application and detect application characteristics related to functional usage, how easy it is to configure the application, the processes used for effecting this control, and the degree to which the application can be administered by multiple parties in cooperating roles.
Data Location Transparency:  The business logic layer of the application does not know the physical location of the data.  The data may be distributed to multiple locations at sites determined by the DBAs and administrators for tuning, fail-over, and storage resource control.
Component Location Transparency:  A client does not know where a target server object resides. It could reside in a different process on another machine across the network, on the same machine but in a different process, or within the same process.  Different components can be distributed over multiple machines, or, copies of the same component can be distributed over multiple machines.
Implementation Transparency:  A client neither knows how a target object is implemented, what programming or scripting language it was written in, nor the operating system and hardware it executes on.  This is also a security issue.
Hardware Virtualization:  The application may be moved unchanged as compiled from the testing to the operational environment and across multiple hardware environments.  VMware is an example of such an approach.
Object (Server) State Transparency: When a client makes a request on a target server object, it does not need to know whether that object is currently activated (i.e. in the executing state) and ready to accept requests or not. The ORB transparently starts the object if necessary before delivering the request to it. This feature greatly helps recovery handling for distributed objects.
Communication Mechanism Transparency: A client does not know what communication mechanism (e.g., TCP/IP, shared memory, local method call, etc.) the ORB uses to deliver the request to the object and return the response to the client.  Multiple communication strategies may be statically or dynamically set.
Security:  The ability of the system to resist unauthorized attempts to access the system and denial-of-service attacks while still providing services to authorized users.  Includes levels of authentication supported, granularity of authorization controls, and techniques utilized to ensure the integrity of resources.
Auditability:  The ability to ensure that the previous system states can later be reconstructed and observed.  Includes recording and analyzing activities to detect intrusions or abuses.
Accountability:  The ability to ensure that the ultimate causes of the previous system states can later be identified.  The ability to know who did what.
Confidentiality:  Keeping information secret only to those who are authorized to see it.
Anonymity:  Concealing the identity of an entity involved in a process.
Privacy:  Ability to follow business rules in regard to the rights and responsibilities that govern the acquisition, disclosure, and use of personal information.
Data Integrity:  Ensuring information has not been altered by unauthorized means.
Data Authentication:  Corroborating the source of data.
Access Control:  Restricting access to resources to authorized entities.
Non-Repudiation:  The ability to provide legally accepted proof that a user executed a particular transaction, even if they deny having done it.  Must be able to discriminate against spoofing and man-in-the-middle attacks.
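The distinction among data integrity, data authentication, and non-repudiation above can be illustrated with Python's standard library. This is a minimal sketch (the message and key are invented): a digest detects alteration, and an HMAC proves the message came from a holder of the shared key. Note that an HMAC does not provide non-repudiation, since either key holder could have produced the tag; non-repudiation requires an asymmetric signature where only the signer holds the private key.

```python
# Sketch: data integrity via a SHA-256 digest, data authentication via an
# HMAC keyed with a shared secret. The message and key are illustrative.
import hashlib
import hmac

message = b"transfer $100 to account 42"

# Integrity: any alteration of the message changes the digest.
digest = hashlib.sha256(message).hexdigest()
assert hashlib.sha256(b"transfer $900 to account 42").hexdigest() != digest

# Authentication: only holders of the shared key can produce a valid tag.
key = b"shared-secret"  # illustrative only; never hard-code real keys
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print("integrity and authentication checks passed")
```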

Qualities of Service

Quality of Service (QoS):  The ability to measure and stay within the guaranteed performance and availability limits for contracted services per SLA.
Scalability:  The ability for the system to grow linearly by adding hardware as the number of users and transactions increase beyond initial implementation.  For high scalability you typically need:
  • Asynchronicity
  • Statelessness
  • Parallelism
  • Pipelining
  • Location Transparency
  • Load Balancing
  • GUIDs
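Two of the enablers above, statelessness and GUIDs, can be sketched in a few lines. In this illustrative handler, every reply depends only on the request itself, so any server replica can serve it, and identifiers are generated without any central counter:

```python
# Sketch of two scalability enablers: GUIDs for identity that needs no
# coordination, and stateless handlers whose replies depend only on the
# request, so requests can be load-balanced across any number of replicas.
import uuid

def handle(request):
    """Stateless: no server-side session is read or written."""
    return {
        "id": str(uuid.uuid4()),       # globally unique, no central sequence
        "user": request["user"],
        "result": request["a"] + request["b"],
    }

r1 = handle({"user": "alice", "a": 2, "b": 3})
r2 = handle({"user": "alice", "a": 2, "b": 3})
print(r1["result"], r1["id"] != r2["id"])  # same result, distinct GUIDs
```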
Performance:  A specification of the workload and the latency or throughput requirement. The form of the specification will depend on the type of system.  In an interactive system, the form of the specification might be an abstract specification of the number of users and a deadline for response; in an embedded real-time system, the form of the specification might be a characterization of the input events and an associated deadline.
Responsiveness:  A measurement of the system response time for a functional requirement.
Availability:  The amount of time that the system is up and running.  It is measured by the length of time between failures, as well as by how quickly the system is able to restart operations after a failure.  For example, if the system was down for one day out of the last twenty, the availability of the system for the twenty days is 19/(19+1), or 95 percent.  This quality attribute is closely related to reliability.  The more reliable a system is, the more available the system will be.  The rule of thumb is that each additional “9” of availability raises system cost ten-fold.
Survivability:  The ability to reestablish operations after a catastrophic event that leaves all existing systems unusable.
Reliability:  The ability of the system to operate over time.  Reliability is measured by the mean-time-to-failure of the system.
Recoverability:  The ability to resume operations after a hardware or software fault that halts the current systems.  Is expressed by: 1. Capability to reestablish the level of performance. 2. Capability to recover the data. 3. Time and effort needed for it.  Measured by Mean-Time-To-Repair.
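The availability arithmetic above, combining mean-time-to-failure (MTTF) and mean-time-to-repair (MTTR), works out as follows; the "each nine costs ten-fold" rule of thumb is easier to appreciate when the nines are converted to downtime per year:

```python
# Worked example of the availability arithmetic above:
# availability = MTTF / (MTTF + MTTR), e.g. up 19 days out of every 20.
def availability(mttf, mttr):
    return mttf / (mttf + mttr)

print(availability(19, 1))  # 0.95, the 95% example from the text

# Downtime per year at each additional "nine" of availability:
for nines in (0.99, 0.999, 0.9999):
    hours_down = (1 - nines) * 365 * 24
    print(f"{nines:.4f} -> {hours_down:.2f} hours/year down")
# 99%   -> ~87.6 hours/year
# 99.9% -> ~8.76 hours/year
# 99.99% -> ~0.88 hours/year
```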
Nomadicity:  Having components (or mobile agents) that can survive relocation of the component and/or automatically discover alternative living peer components in the event that the peer components they were communicating with fail, or can continue to work without interruption should related components fail.  The Internet’s DNS, Microsoft Outlook/Exchange, and the cellular phone system are examples of nomadic systems.  Service Oriented Architecture (SOA) using UDDI can easily be made nomadic.
Autonomic Computing:  Autonomic computing derives its name from the autonomic nervous system and denotes its ability to free our conscious brains from the burden of dealing with the vital, but lower-level, functions of the body. As used in the software industry, autonomic computing refers to self-managing systems that share four core characteristics:
  • self-configuration,
  • self-healing,
  • self-optimizing, and
  • self-protecting.
Transactionality:  The ability to commit a unit of work or, in the case of an error, roll-back the data across all tables and stores.  To be transactional all of the ACID constraints (qualities) must be met:
  • Atomic: all or nothing
  • Consistent: logically correct transformation
  • Isolated: no concurrency bugs
  • Durable: survives failures
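The commit-or-roll-back behavior described above can be demonstrated with Python's built-in sqlite3 module. In this sketch (table and account names are illustrative), two updates form one unit of work; a simulated failure before commit rolls both back, leaving the data consistent:

```python
# Sketch of transactionality: both updates commit as one unit of work,
# and an error before commit rolls the whole unit back.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE acct (name TEXT PRIMARY KEY, balance INT)")
con.execute("INSERT INTO acct VALUES ('alice', 100), ('bob', 0)")
con.commit()

try:
    with con:  # one transaction: commits on success, rolls back on exception
        con.execute("UPDATE acct SET balance = balance - 60 WHERE name='alice'")
        con.execute("UPDATE acct SET balance = balance + 60 WHERE name='bob'")
        raise RuntimeError("simulated crash before commit")
except RuntimeError:
    pass

balances = dict(con.execute("SELECT name, balance FROM acct"))
print(balances)  # rolled back: {'alice': 100, 'bob': 0}
```

Without the transaction, a crash between the two updates would have destroyed $60: exactly the inconsistency the ACID constraints exist to prevent.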

Software Life-Cycle Qualities

Evolvability:  The ability of a system to change over time with minimum phased cut-over impact.
Composability:  The ability to sell the system as separate components that can work on their own or together in harmony.
Extensibility:  The ability to easily add new data and behaviors to existing code by the developers.
Tailorability:  The ability to easily adapt the system to the needs of a specific customer.  Includes the ability to re-brand the user interface.  Minimizes the effort needed to maintain separate code bases.
Adaptability:  The ability to define business processes, business rules and refine data types by an application administrator dynamically, without needing code revision.
Maintainability:  The measurement of how easy it is to change the system to incorporate new requirements.  The two aspects of maintainability are cost and time.  If a system uses an obscure technology that requires high-priced consultants, even though it may be quick to change, its maintainability can still be low.
Portability:  Measures the ease with which the system can be moved to different platforms.  The platform may consist of hardware, operating system, application server software, or database server software.
Reusability:  The ability to reuse portions of the system in other applications.  Reusability comes in many forms.  The run-time platform, source code, libraries, components, operations, and processes are all candidates for reuse in other applications.
Integrability:  The ability to make the separately developed components of the system work correctly together as a whole.  This in turn depends on the external complexity of the components, their interaction mechanisms and protocols, and the degree to which responsibilities have been cleanly partitioned, all architecture-level issues.  Integrability also depends upon how well and completely the interfaces to the components have been specified. Integrating a component depends not only on the interaction mechanisms used (e.g., procedure call versus process spawning) but also on the functionality assigned to the component to be integrated and how that functionality is related to the functionality of this new component's environment.   
Interoperability:  Integrability measures the ability of parts of a system to work together; interoperability measures the ability of a group of parts (constituting a system) to work with another, external system.  The interoperability of a system depends on the extent to which the system uses open integration standards and how well the API is designed such that other systems can use the components of the system being built.
Testability:  The ease with which software can be made to demonstrate its faults through testing (probability that the software will fail on its next test, given that it has at least one fault).  How easily the system can be tested using human effort, automated testing tools, inspections, and other means of testing system quality.  Good testability is related to the modularity of the system.  If the system is composed of components with well-defined interfaces, its testability should be good. 
Variability:  How well the architecture can handle new requirements.  Variability comes in several forms.  New requirements may be planned or unplanned.  At development time, the system source code might be easy to extend to perform new functions.  At run-time, the system might allow pluggable components that modify system behavior on the fly.  This quality attribute is closely related to modifiability.
Subsetability:  The ability of the system to support a subset of the features required by the system.  For incremental development, it is important that a system can execute some functionality to demonstrate small iterations during product development.  It is the property of the system that allows it to build and execute a small set of features and to add features over time until the entire system is built.  This is an important property if the time or resources on the project are cut.  If the subsetability of the architecture is high, a subset of features may still make it into production.
Conceptual Integrity:  The ability of the architecture to communicate a clear, concise vision for the system, also known as Architectural Style.  Fred Brooks writes, “I am more convinced than ever. Conceptual integrity is central to product quality. Having a system architect is the most important single step toward conceptual integrity…. After teaching a software engineering laboratory more than 20 times, I came to insist that student teams as small as four people choose a manager and a separate architect” (Brooks 1995).  Kent Beck believes that metaphors are the most important part of the eXtreme Programming methodology (Beck 1999).  The metaphor is a powerful means of providing one or more central concepts for a system.  The metaphor provides a common vision and a shared vocabulary for all system stakeholders.  The metaphor provides a means to enforce conceptual integrity.  When the design of the system goes outside the bounds of the metaphor, the metaphor must change or new metaphors must be added; otherwise, the design is going in the wrong direction.  If any of these design decisions are made without the concept feeling right, the conceptual integrity of the system will be lost.  Sometimes the system metaphor is an architectural pattern, such as MVC or Blackboard.  These architectural patterns provide a common metaphor for system developers or others who understand the patterns.
Non-obsolescence:  If the system is intended to have a long lifetime, modifiability and portability across different platforms become important, as well as rejection of technologies that may become obsolete and vendors that may go out of business. Building in the additional infrastructure (such as a portability layer) to support modifiability and portability will usually compromise time to market.  On the other hand, a modifiable, extensible product is more likely to survive longer in the marketplace, extending its lifetime.
Installability:  The capability of the software product to be installed in a specified environment.
Co-existence:  The capability of the software product to co-exist with other independent software in a common environment, sharing common resources.
Replaceability:  The capability of the software product to be used in place of another specified software product for the same purpose in the same environment.
Standards Compliance:  Adheres to external open standards.

The following qualities are absent from this list and may be added later if needed: flexibility, understandability, deployability, configurability, degradability, accessibility, demonstrability, footprint, simplicity, stability, timeliness, schedulability, openness, seamlessness, safety, trust, error tolerance, multi-undo, feedback, empowerment.  A partial list and discussion can be found on Wikipedia.

Architectural Style

Architectural style has emerged as a common means for specifying the highest-level principles underlying the organization of software systems.  It was developed at Carnegie Mellon by Mary Shaw and David Garlan and appeared in the book titled “Software Architecture: Perspectives on an Emerging Discipline.” (ref.)  An architectural style defines systems in terms of a pattern of structural organization.  More specifically, an architectural style determines the vocabulary of components and connectors that can be used in instances of that style, together with a set of constraints on how they can be combined.  Examples of architectural styles are:
  • Blackboard
  • Client–server model (2-tier, n-tier, cloud)
  • Database-centric
  • Distributed computing
  • Event-driven architecture (Implicit invocation)
  • Front end and back end
  • Monolithic application
  • Peer-to-peer
  • Pipes and filters
  • Plug-in
  • Representational State Transfer
  • Rule evaluation
  • Search-oriented architecture
  • Service-oriented architecture
  • Shared nothing architecture
  • Software componentry
  • Space based architecture
  • Structured (Module-based)
  • Three-tier model
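One style from the list above, pipes and filters, is easy to sketch with Python generators: each filter consumes a stream and yields a transformed stream, knowing nothing about its neighbors. The filters themselves are invented for illustration:

```python
# Sketch of the pipes-and-filters style: each filter is an independent
# component connected only by the stream flowing through it.
def parse(lines):
    """Filter 1: normalize raw lines."""
    for line in lines:
        yield line.strip().lower()

def keep_words(tokens):
    """Filter 2: drop anything that is not a plain word."""
    for t in tokens:
        if t.isalpha():
            yield t

def count(tokens):
    """Sink: consume the stream and report how many items survived."""
    total = 0
    for _ in tokens:
        total += 1
    return total

result = count(keep_words(parse([" Alpha ", "42", "Beta\n"])))
print(result)  # 2
```

Swapping, reordering, or inserting filters requires no change to the others, which is exactly the constraint-and-vocabulary discipline an architectural style imposes.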
Architectural styles differ in how componentization is implemented.  In defining your style there should be absolutely no ambiguity on these points:
  • Identity - how a component (or object) is identified and referenced
  • Behavior - how a component supports a well-defined behavior and provides an interface to it
  • Realization - how a component comes to exist and maintain its behavior and identity
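The three points above can be made concrete with a small sketch: identity as an explicit GUID, behavior as an abstract interface, and realization as a factory that brings components into existence. All class and function names here are invented for illustration:

```python
# Sketch of Identity, Behavior, and Realization for a component.
import abc
import uuid

class Greeter(abc.ABC):
    """Behavior: the interface every component of this kind must support."""
    @abc.abstractmethod
    def greet(self, name: str) -> str: ...

class EnglishGreeter(Greeter):
    def __init__(self):
        # Identity: a stable, globally unique reference to this component.
        self.identity = str(uuid.uuid4())

    def greet(self, name):
        return f"hello, {name}"

def realize(kind="english") -> Greeter:
    """Realization: how a component comes to exist (here, a simple factory)."""
    registry = {"english": EnglishGreeter}
    return registry[kind]()

g = realize()
print(g.greet("world"))  # hello, world
```

A style that leaves any of the three ambiguous (say, components with no defined way to be referenced) invites exactly the integration problems the following paragraph warns about.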
Care must be taken to select an architectural style that best supports an application’s non-functional requirements.  A style should not be selected because it is the current favorite buzzword.  For example, Service Oriented Architecture (SOA) using WS-* technologies is good for open, loosely-coupled integration with external companies using short-lived, small-packet request/response interactions, but is poor for an embedded, integrated set of applications that run within a company.

An application should apply a style consistently throughout, but the style may in some cases be muddied by a need to interface with a system that uses another style (e.g. a client/server system that needs to interface with a MOM system).  The large-scale structure and the fine-scale structure may have differing styles.  All these situations should be clearly identified.

Considerable research into the alternatives should go into the choice of an application’s architectural style as it is the most important decision in regard to the design of the system.  A clear statement should be made defining the overall architectural style that is to be applied to the system along with the supporting rationale.  An example of this would be:
The large-scale structure of the system is built on stateless components, oriented around a uni-directional dataflow paradigm, on top of an asynchronous publish/subscribe message infrastructure using queues at the publish site, not a central hub.  Each component is loaded at system startup and remains in existence till system drain & shutdown.  An operations management component shall monitor the state & workload of each component and will take appropriate automated action as needed or escalate to an operator.  At the same time, the fine-scale structure of the system is based on an object-oriented language with code that employs inheritance, abstract classes, interfaces, object references, and design patterns.
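The publish/subscribe portion of a statement like the one above can be sketched in Python, with queues held at the publish site rather than at a central hub (the `Publisher` class and subscriber names are hypothetical):

```python
import queue


class Publisher:
    """Holds one queue per subscriber at the publish site (no central hub)."""

    def __init__(self) -> None:
        self.queues = []  # one queue.Queue per subscriber

    def subscribe(self):
        # Each subscriber gets its own queue, decoupling its consumption
        # rate from the publisher and from other subscribers.
        q = queue.Queue()
        self.queues.append(q)
        return q

    def publish(self, event: dict) -> None:
        # Uni-directional dataflow: events fan out, nothing flows back.
        for q in self.queues:
            q.put(event)


orders = Publisher()
billing_q = orders.subscribe()
shipping_q = orders.subscribe()

orders.publish({"order": 42})
print(billing_q.get(), shipping_q.get())  # both subscribers see the event
```

Because the components consume from their queues and hold no session state, any of them can be restarted or scaled by the operations management component without coordinating with the others.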

Architectural Layers

Application Architecture is an abstract system specification consisting primarily of functional components described in terms of their behaviors and interfaces, and of component-to-component interconnections. 

Each layer in the stack serves as an abstraction to the layer above it. Each layer in the stack must have a set of well-defined interfaces that are on the same level of abstraction (serving some specific purpose). Each layer is responsible for its own sanity checks, and typically trusts each of the other layers to do the same unless the data is coming from an external source. Not every layer is used in every transaction. For example, a simple HTML interaction only goes down through the top several layers and then returns, while batch loading from a legacy system may skip several layers.

If everyone develops their application along a layered architecture, then each team only has to worry about communication with the layers immediately above and below them. If their layer is sometimes skipped, they must provide a transparent pass-through module so the design rule that layers interact only with the layers immediately above and below them remains in effect with few exceptions. Specifying a sequence for top-to-bottom control flow prevents the problem of cyclic deadlock among objects. Implementation via purchased software packages (even ones crossing multiple layers) is permitted as long as “wrappers” are coded that integrate the package within the overall architecture, and as long as essential administrative control features are present for each layer (e.g. scalable configuration).
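The adjacent-layers rule and the transparent pass-through module can be sketched in Python (layer names here are hypothetical):

```python
class Layer:
    """Base layer: a layer talks only to the layer immediately below it."""

    def __init__(self, below=None):
        self.below = below

    def handle(self, request: str) -> str:
        # Default: forward the request unchanged to the layer below.
        return self.below.handle(request) if self.below else request


class Presentation(Layer):
    def handle(self, request: str) -> str:
        return f"<html>{self.below.handle(request)}</html>"


class PassThrough(Layer):
    """Transparent module standing in for a layer this flow skips,
    so the adjacent-layers design rule still holds."""
    # Inherits handle(): forwards the request unchanged.


class Data(Layer):
    def handle(self, request: str) -> str:
        return f"row:{request}"


# A flow that skips the middle layer still goes "through" it.
stack = Presentation(PassThrough(Data()))
print(stack.handle("42"))  # → <html>row:42</html>
```

A wrapper around a purchased package would take the same shape: a `Layer` subclass whose `handle` delegates to the package while preserving the layer's interface and administrative controls.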

Breaking the application architecture into a many-layered onion facilitates the adaptability of an application while adding to its complexity. Here is an example that calls out most of the possible layers, starting from the customer interface down to persistent storage. 

System Layers:

  • Operations Management: Initialization, Restart, Scalability, Monitor, Fail-over, Accountability
  • (Communicating Objects): the application layers below go here
  • Component Middleware: Messaging, Load Balancing, Reusability, Interoperability
  • Directory / Security: DNS, LDAP, X.500
  • Operating System, Network Protocol
  • System Administration
  • Hardware, Wire, Disks


Application Layers:

  • Customer
      ◦ Customer Applications & Applets: Browser, Java applet, Legacy system
  • Communications: HTTP, ftp, telnet, 3270, EDI, VPN, fax
  • Access: Security, Certification, Firewall, Encryption, Session Closure, Connection Context
  • Presentation
      ◦ Native Presentation: Layout XML to HTML, Character, Record layout, EDI (via Templates+Rules), Validation
      ◦ Generic Presentation: Semantic XML to Layout XML
  • Transaction
      ◦ Workflow Processing: Commerce Events, State Table, Workflow, Job Initiation, Temporal & Priority Control
      ◦ Application Context
  • Application Logic: custom application-specific code goes here
  • Data
      ◦ Business Object: Metadata markup, DB Data ↔ Semantic XML
      ◦ Semantic Navigation: Ontology, Context, KQML, Semantic Undo
      ◦ Enterprise Schema: Data Dictionary, Referential Integrity
      ◦ Data Distribution: Heterogeneous navigation, Scale Redirection, Legacy Redirection
      ◦ DB Transaction: Stored Procedures, ACID
      ◦ Data View: SQL, Dataset
      ◦ Physical Data Stores: Index/Cluster Architecture, Bloom Filter, Encryption, RAID