In “Software Design Challenges” by Carlos Otero, there is the assumption that software design requires “engineering” and that the process itself is a “project”. These challenges are an artifact of an industry filled with misinformation and confusion. The fact is that software design is driven by decisions made in a constantly changing environment, by accountable people with authority who understand that quality improvement is a continuous process. Software design actually begins with data design, which defines “what” needs to be entered, edited, searched, and sorted, before deciding “how” users interact with that data. Understanding “what” before “how” is the biggest challenge facing software design.

1) Design is Driven by Decisions
If I told you how to make pancakes using flour, sugar, milk, and eggs, it would not take long to teach you how to make waffles or pizza. But if I told you that you could only make the best pancakes using my strain of flour, and I sold you the rights to use my flour, sold you the books and classes on mixing the ingredients, and made a profit mostly by making the pancakes for you, then I would essentially be in the software and professional services business.
Design challenges in software are not so much about the art as they are about the artist. Satisfying the requirements of various “stakeholders” is like herding cats. When the “budget” (price) and the “deadline” (service) compete with the “features” (quality), an interesting dynamic takes place. Successful software development projects allow priorities of function to overcome priorities of form. Projects fail for many reasons, but usually because leadership fails to delegate authority or assign responsibility until the costs cripple the task. Decisions, triage really, are the soul of successful software development projects.

2) The Only Constant is Change
Information systems need to adapt to changes as rapidly as business processes change, including during the design, development, and deployment phases of software projects. For many businesses, processes do not change very often: the software they use is too costly to adapt, requiring re-engineering or replacement rather than evolving to accommodate gradual adjustments to changing needs. So the needs accumulate over time, until they reach a critical mass that justifies the cost of re-engineering.
This problem of creating software that can adapt to change has been addressed by an increasingly complex set of software engineering principles, codified in various products, but seemingly usable by only a few top scientists and consultants in the field. The average programmer still thinks linearly, programs to specific requirements as they exist at a point in time, and lacks the ability to evolve a software solution without repeatedly involving project management, quality control, and requirements gathering. This costly approach is so disruptive to the business that years pass before needs are met. Productivity slips into “interim” solutions, spreadsheets, or electronic documents shared over file systems.
The idea of software adapting to change actually began in 1970 with the publication of E. F. Codd’s paper “A Relational Model of Data for Large Shared Data Banks”. Before the application of the Relational Model, data would coexist with processing logic, and follow structured hierarchies or fixed process-specific networks tied to the process logic. A relational data model allows the data to exist distinctly from the current processing logic, so that the logic can adapt to process changes without requiring highly disruptive changes (i.e., re-engineering) of the data stores.
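To make that separation concrete, here is a minimal sketch in Python using the standard-library sqlite3 module; the orders table and both queries are invented for illustration, not taken from Codd’s paper. The same relational data serves two unrelated pieces of processing logic, and either piece can change without touching the other or the storage:

    import sqlite3

    # The data model exists on its own; no processing logic is baked into it.
    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE orders (
            order_id   INTEGER PRIMARY KEY,
            customer   TEXT NOT NULL,
            amount     REAL NOT NULL,
            ordered_on TEXT NOT NULL
        )""")
    db.executemany(
        "INSERT INTO orders (customer, amount, ordered_on) VALUES (?, ?, ?)",
        [("Acme", 120.00, "2014-01-15"),
         ("Acme", 80.00, "2014-02-03"),
         ("Zenith", 45.50, "2014-02-10")])

    # One process: billing logic totals amounts by customer.
    for row in db.execute(
            "SELECT customer, SUM(amount) FROM orders GROUP BY customer"):
        print(row)

    # Another process: reporting logic lists recent activity. Neither process
    # dictates how the data is stored.
    for row in db.execute(
            "SELECT order_id, ordered_on FROM orders ORDER BY ordered_on DESC"):
        print(row)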
Requirements volatility, or business-need changes evolving over time, is manageable when attention is paid to the structures of data storage. Relational database design specifically addresses this problem by organizing data into meaningful sets that can be adapted easily as the needs of a business change over time. This is possible even if those changes in requirements are largely due to an evolving understanding of the semantics under analysis.
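As a hedged illustration of that adaptability (the customers table and its new region column are hypothetical), a new business need is met here by extending a set in place, while logic written against the original requirements keeps working unchanged:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute(
        "CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    db.execute("INSERT INTO customers (name) VALUES ('Acme')")

    # Logic written against the original requirements.
    def customer_names(conn):
        return [name for (name,) in conn.execute("SELECT name FROM customers")]

    print(customer_names(db))  # works before the requirement changes

    # A new business need arrives: customers now belong to a sales region.
    # The set is extended in place; no re-engineering of existing logic.
    db.execute("ALTER TABLE customers ADD COLUMN region TEXT")
    db.execute("UPDATE customers SET region = 'NA' WHERE name = 'Acme'")

    print(customer_names(db))  # still works after the change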

3) Quality Improves with Evolution, not Revolution
Software quality is measured by data integrity, usability, and performance. Data integrity is compromised by poor security, bad storage design, program logic errors, and disruptive re-engineering conversions. Usability declines when important data elements cannot be captured or clumsy user interfaces lower user productivity. Poor performance is a concern separate from usability, arising when response times suffer from inadequate hardware or inefficient algorithms.
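To ground the data-integrity point, the sketch below (table names are invented) shows storage design enforcing integrity at the source, so a program logic error is rejected outright rather than silently corrupting the data:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled

    db.execute("""
        CREATE TABLE departments (
            dept_id INTEGER PRIMARY KEY,
            name    TEXT NOT NULL UNIQUE
        )""")
    db.execute("""
        CREATE TABLE employees (
            emp_id  INTEGER PRIMARY KEY,
            name    TEXT NOT NULL,
            dept_id INTEGER NOT NULL REFERENCES departments(dept_id)
        )""")

    db.execute("INSERT INTO departments (name) VALUES ('Accounting')")
    db.execute("INSERT INTO employees (name, dept_id) VALUES ('Pat', 1)")

    try:
        # A program logic error: referencing a department that does not exist.
        db.execute("INSERT INTO employees (name, dept_id) VALUES ('Lee', 99)")
    except sqlite3.IntegrityError as err:
        print("rejected by the storage design:", err)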
Because software codifies business processes, its quality evolves over time with business process experience, and is not necessarily improved by new technology. In fact, new technology can regress business processes to a less mature and less useful state, forcing employees to build their own solutions, disjointed from each other. They do this to improve their own productivity and satisfy the changing information requirements of their superiors. Technology vendors thrive on this cycle, alternating between selling to department heads and selling to enterprise chiefs to satisfy varying needs for change.
Technology does not change as fast as vendors would have you believe. After a few years of experience learning new and improved languages, frameworks, and operating systems, it becomes quite apparent that “new” is always “different”, but not necessarily “improved”, except for the fact that it remains supported by the vendor, whereas “old” no longer is.
The take-away: pay more attention to enabling business process improvement, and rely less on off-the-shelf solutions as a panacea for productivity.

4) Authority Requires Accountability
Distributed software development is a self-inflicted challenge. If your software development is a “process” untied to knowledge of your “business”, then you probably don’t need to be developing anything. If your business process needs are changing over time, then you probably want to retain your subject matter experts to evolve your software rather than requesting proposals for redesign. The transformation from requirement to solution will then be much smoother.
Giving stakeholders responsibility for the success of their software development projects requires both authority and accountability. If stakeholders don’t understand the technology, then it is easy for them to place blame on the technologists who don’t understand their business processes, and vice versa.

5) “What” before “How”
Software development teams (including subject-matter experts, users, and other stakeholders) need to distinguish the transactional data from the reference data. They need to clarify conceptually what uniquely distinguishes one “thing” from another “thing”. They need to associate things that belong together. In essence, they need to focus on “What” they need.
The Relational Model of data allows stakeholders to organize their information logically. But no software tool will make the decisions needed to clearly define the data elements and their relationships to each other. Successful software development starts with defining “what” they need entered, edited, searched, and sorted, before deciding “how” users interact with that data. If you get that right, then most of the other problems with software design will be greatly diminished or eliminated entirely.
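As a closing sketch, assuming invented names throughout, here is “what” expressed as reference data, transactional data, and the association between them, with entering, editing, searching, and sorting all following from that structure:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("PRAGMA foreign_keys = ON")

    # Reference data: slowly changing "things", each uniquely identified.
    db.execute("""
        CREATE TABLE products (
            product_id INTEGER PRIMARY KEY,
            sku        TEXT NOT NULL UNIQUE,
            name       TEXT NOT NULL
        )""")

    # Transactional data: events that point at those things.
    db.execute("""
        CREATE TABLE sales (
            sale_id    INTEGER PRIMARY KEY,
            product_id INTEGER NOT NULL REFERENCES products(product_id),
            quantity   INTEGER NOT NULL CHECK (quantity > 0),
            sold_on    TEXT NOT NULL
        )""")

    # Entered
    db.execute("INSERT INTO products (sku, name) VALUES ('P-100', 'Widget')")
    db.execute(
        "INSERT INTO sales (product_id, quantity, sold_on) VALUES (1, 3, '2014-03-01')")
    # Edited
    db.execute("UPDATE products SET name = 'Widget, Large' WHERE sku = 'P-100'")
    # Searched and sorted
    for row in db.execute("""
            SELECT p.name, s.quantity, s.sold_on
            FROM sales s JOIN products p ON p.product_id = s.product_id
            ORDER BY s.sold_on"""):
        print(row)

Note how the “how” (forms, screens, reports) could take many shapes on top of this; it is the structure of the data that carries the requirements.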
