Design, Develop, Create

Tuesday 25 September 2012

Readings: Agile critique and comparison

Cusumano, M. A. (2007) Extreme Programming Compared with Microsoft-Style Iterative Development. Communications of the ACM, 50, 15-18.

Williams, L., Brown, G., Meltzer, A. & Nagappan, N. (2010) Scrum + Engineering Practices: Experiences of Three Microsoft Teams. International Symposium on Empirical Software Engineering and Measurement. (link)

Kruchten, P. (2007) Voyage in the Agile Memeplex. ACM Queue, 5, 38-44.



In 1999 the world of software engineering was disrupted by the emergence of agile methods: Extreme Programming, the Agile Manifesto, Kanban, SCRUM and others. All were created in reaction to the then prevailing consensus, if not hegemony, of stage-wise development (aka Waterfall), intending to upend the management-heavy methods dominant in industry. Today, the worm has turned. The current dominance of "Agile" (with a capital A) creates the impression of a new hegemony: that we should all be Agile, that managers are SCRUM masters, that programmers turn backlogs into features, that everything is done in iterations, releasing continuously, designing rapidly, working in Sprints, and so on. (Higgins)



Readings: Creativity & Teams


Curtis, B., Krasner, H. & Iscoe, N. (1988) A Field Study of the Software Design Process for Large Systems. Communications of the ACM, 31, 1268-87.

Hargadon, A. B. & Bechky, B. A. (2006) When Collections of Creatives Become Creative Collectives: A field study of problem solving at work. Organization Science, 17, 484-500.

Sawyer, S. (2004) Software development teams. Communications of the ACM, 47, 95-99.



Read the articles and post a thoughtful observation or question on your own blog!


Saturday 22 September 2012

Maintenance (SDLC)

MAINTENANCE AND THE DYNAMICS OF 'USE': On-going development of products in-use.
Maintenance, often termed support, is a crucial activity for linking the experiences of users/customers with the product delivery organization. We consider perspectives on high tech maintenance, from bug fixing through to design-focused activities.

THE CHALLENGES OF MAINTENANCE
Both soft and physical goods need to be maintained over their economic lifetimes, and the time spent in maintenance is many multiples of the time spent in initial development. It also turns out that usability and scope, key drivers of customer value and usefulness for software (Shapiro and Varian, 1998, Varian et al., 2004), also drive the generation of multiple versions. A single product codebase can be used to generate multiple versions of the same underlying architecture for the same release date. Adding new features, perfecting and adapting the product continuously increases its scope. In addition, the work of maintaining the software also generates new products, subsequent versions and revisions incorporating new capabilities, fixes etc. (Figure below). Multiple versions are part-and-parcel of software, an inherent and inevitable consequence of releasing applications built on changing code.

Figure: SDLC as interrelated activities

If we look at Eason’s depiction of an idealized systems development process we can imagine both the user and the technology co-evolving over time as learning is acquired from one another. Eason hypothesizes that users (and therefore organizations) learn, but they also teach developers how the technology system may evolve over time.
“The exploitation of information technology requires a major form of organizational and individual learning… The exploitation of the capabilities of information technology can only be achieved by a progressive, planned form of evolutionary growth.” (Eason, 1988)
Systems evolve from limited basic functionality towards more sophisticated and capable forms over time. Consequently maintenance tools and maintenance thinking have begun to permeate the whole product experience.

Designed change (even for corrective work) is a change to the product, and so the economic assumption that a delivered software product is a finished good is false. The practical reality guarantees that a high tech system will inevitably undergo further change (Swanson, 1976; Swanson, 1989; Poole et al., 2001). High tech systems undergoing maintenance are often regarded as a 'mutable mobile', technology that evolves and changes in use (Law & Singleton, 2005); the idiom of maintenance is employed even though software does not wear out or degrade. Maintenance work is difficult and messy; patches must satisfy new demands without breaking existing installations, which must work as before (only better).

A view generally held by programmers is that it is better for your career to work on next generation technology than to be stuck bug fixing or maintaining old versions within the straitjacket constraints of compatibility and legacy codebases (Ó Riain, 2000). Maintenance jobs are therefore often outsourced to low cost locations or shunted into the background noise of the workplace, and developers often shun the work of maintaining a venerable ‘old version’ as they jockey for assignment to new product projects.

ORGANISING DELIVERY AND MAINTENANCE
Eason (1988) describes five main implementation strategies for delivery/deployment, graded according to how radically the system changes (Figure below), ranging from revolutionary to evolutionary change and imposing a correspondingly difficult or easy burden of adaptation on the user. Leonard-Barton (1988) described this same process of high tech systems implementation as one of mutual adaptation, of gradual convergence of systems functionality and performance over time.

Figure: Implementation Strategies (Eason, 1988)

Two product delivery paradigms dominate high tech development projects: single shot implementations on the one hand and continuous process systems on the other. The two can be likened to the difference in manufacturing between batch-based and process-centric production. A batch model constructs production as an assembly process; the finished product is built up over time by combining prescribed ingredients or materials in set amounts in sequence. The process model is also recipe driven; however a process-based production exercise is continuous. The pharmaceutical industry employs the batch/lot style production model extensively. The food processing and drinks industry blends batch with process control; inputs and variables are controlled over time to produce a continuous stream of end product. In both batch and process production the overall design vision is captured in the plan or recipe, a set of instructions to be followed to construct a well-specified finished product. Control and management focus on reproducing the design to exacting precision at the lowest possible cost, over and over again. Both manufacturing models attempt to ensure that production of the product conforms with the known design efficiently, accurately, and cheaply. However neither the batch nor the process model encapsulates learning effects. Both are static production models, mechanistic rather than evolving, and highly appropriate in settings where the goal is producing the same goods in volume to high quality standards (reproducing the original).

We can see however the influence of these models on classical interpretations of the systems development life cycle. Delivery occurs after an exhaustive up-front design process that concludes with the production of the first copy/version of the system being delivered in the first user installation. The technical design process dominates the early stages of development after which the delivered system imposes learning demands on organizations and users if they are to reap the benefits of the new system. The practical reality is however somewhat different.

We already know, however, that implementation of high tech systems does not usually coincide with the final delivery of a system. Delivery is in fact one of the most contentious periods of a project’s life. Delivery crystallizes all the anticipated promises of a new system into a single ‘moment of truth.’ The moment of truth is multiple and inconclusive, involving each user and use moment, each user interaction and system interaction. Use crystallizes the user’s transition from one system to the next. The transition may be from an existing system to an upgraded system, entailing minor changes in use, appearance and performance. Transition may be more radical, from an existing systems context to a markedly different one that demands difficult and radical changes by users as they attempt to adapt the new tool to use. Radical adaptation may involve new user behavior, knowledge, tasks, and skills; it may also negate existing behavior, knowledge, tasks, and skills. Delivery may also be into a new market; it may displace existing systems, it may even co-exist with existing and competitor systems, perhaps interoperating with them at some level.

MUTUAL ADAPTATION AS A METAPHOR FOR DEVELOPMENT/MAINTENANCE
While new product development projects are the hallmark of the knowledge economy, it is a commonplace acknowledged in the industry that innovations are never fully designed top-down nor introduced in one shot. High tech systems are often developed by being ‘tried out’ through prototyping and tinkering. Eric Von Hippel (2005) traced the history of selected technology innovations and arrived at a pragmatic realization that products continue to be developed even when they leave the confines of a laboratory or engineering shop. He develops the concepts of ‘lead user’ and ‘innovation communities’ and concludes that innovation is a process of co-production shared between the producer and the consumer of a new product. Innovation might therefore be thought of as maintenance; a collective and intrinsically social phenomenon resulting from the fluidity of systems undergoing cycles of design, delivery, learning through use that feeds back into further design, delivery and learning.

Proactive support in the form of ‘digital assistance’ has been built into high-tech systems through help menus, user guides, and tutorials. Automated issue reporting is also used to send reports and diagnostic traces (e.g. state logs, configuration settings, memory dumps) directly to the producer when failures occur. Automated online updating and service pack distribution are also employed as means of keeping customers’ installations up-to-date. Diagnostic and investigative analysis of user click-streams may also be available for the producer to analyse and respond to actual user/customer behaviour. At the most fundamental level, however, support activities must first link customer details with a description of the problem.

User issues fall into three main categories: corrective, perfective, and adaptive (Swanson, 1976). Corrective issues are the classic ‘bugs,’ performance and operational failures. Perfective issues address incremental improvements and performance enhancements. Adaptive issues are concerned with responding to changes in the operating environment thereby introducing new or altered functionality. Only one of these categories can really be considered to address failure. The other two, perfective and adaptive, imply that software maintenance is necessarily a design activity. The problem with software is the complex interdependencies within itself, its surrounding technologies and tools, and the environment it operates within. Therefore, changing software usually generates unexpected side effects.
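A purely illustrative sketch in Python (the class and field names are hypothetical, not drawn from any particular tracker) of how Swanson's three categories might be recorded against issues:

from dataclasses import dataclass
from enum import Enum

class MaintenanceCategory(Enum):
    CORRECTIVE = "corrective"   # classic 'bugs': performance and operational failures
    PERFECTIVE = "perfective"   # incremental improvements and performance enhancements
    ADAPTIVE = "adaptive"       # responses to changes in the operating environment

@dataclass
class Issue:
    identifier: str
    summary: str
    category: MaintenanceCategory

# Only the corrective category records failure; the other two are design work.
issue = Issue("ISSUE-101", "Support a revised payment-provider interface", MaintenanceCategory.ADAPTIVE)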

Over time modern issue tracking systems have themselves evolved into general-purpose incident tracking and reporting environments. These applications (e.g. JIRA, MANTIS) are therefore often used both as incident repositories and as planning tools. In this way the maintenance process itself has, over time, evolved from being an end-of-life risk management and repair tracking process into a direct link between the user/customer and the development organization. Maintenance has therefore become a crucial source of feedback and a key driver of new product requirements. Issue tracking systems have expanded to include the database of features under development, and the workflows surrounding issues have been adapted to manage development itself.

We can conclude that software maintenance and new product innovation projects are more closely related than is commonly accepted. The activities of maintenance and support are important sites for the innovation of technologies in development, at least as important as the work of new product development.

Tuesday 18 September 2012

Evaluation (SDLC)

VALUING, SIZING AND SOURCING THE HIGH-TECH PRODUCT
Evaluation is the process of making the case for high tech decisions based on the benefits and costs associated with a project or product features. Assessing the value and cost of features for development is treated either as a simple problem of ascribing value-for-money, or as an obscure process, part inspiration and part politics, where decisions are made behind closed doors.

INTRODUCTION
How do we evaluate high-tech objects and the objectives of systems development projects? Organizations desire to control and direct their destinies. Organizational technology strategies therefore need the support of tools and methods as aids for making investment decisions (Powell, 1992). We need to be able to answer the following questions:

  • How should we go about evaluating high tech use and investment decisions?
  • How useful are the various approaches and what, if anything, do they ignore?
  • When we select and value high tech features and products, how do we go about making those decisions?
  • What different approaches are available to help us evaluate choices between different products, services, features, and suppliers?
Evaluation is the process by which we decide whether or not to purchase or commit ourselves to something. Evaluation activities are by definition decisive periods in the life of any high tech project. Evaluation is often assumed to be made on overwhelmingly rational, economic criteria; however it may also be an emotional, impulsive or political decision (Bannister and Remenyi, 2000). There is a plethora of tools available for making optimal financial decisions based on the premise that significant aspects of the system can be monetized. But there are also tools that help us reveal unquantifiable aspects and soft factors, to facilitate the formation of qualitative decisions.

UNDERSTANDING EVALUATION
All decisions arise from a process of evaluation, either explicit or implied: the process of valuing an option by balancing its benefits against its costs. Furthermore, decisions arise throughout the development life cycle as and when options are identified. Formally the SDLC describes evaluation as a separate phase and activity; in practice, however, evaluation takes place continuously, albeit with shifts in frequency, formality or emphasis.
Having gathered user requirements by looking at and observing behaviour in the field, how do we analyze, judge and identify significant patterns or benefits for inclusion in new developments?

VALUE AND COST
When a high tech investment delivers value in the form of payouts over time, financial tools like ROI and NPV can be used as aids to the decision making process: to invest or not to invest. Improved financial performance is an important criterion for judging an IT funding opportunity. Payouts may take the form of estimable cost savings or additional periodic revenue. While financial performance measures are not always assessable or relevant to all investment decisions or project commitments, they should be created (and their assumptions made explicit) wherever possible as one of a range of inputs into the decision making process.

Financial measures are often the easiest to create and maintain for organizational decision-making as they readily incorporate different assumptions as factors change.

ROI
The return on investment (ROI) is simply the ratio of an investment’s payout to the initial investment; it can be read as the effective rate of return over the period considered.
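A minimal sketch in Python with purely hypothetical figures (10,000 invested, 12,500 returned over the period); the text defines ROI as the payout over the initial investment, and the common net-gain variant is shown alongside:

def simple_roi(payout, initial_investment):
    """ROI as described above: the payout divided by the initial investment."""
    return payout / initial_investment

def net_roi(payout, initial_investment):
    """The common net-gain variant: (payout - investment) / investment."""
    return (payout - initial_investment) / initial_investment

# Hypothetical figures: 10,000 invested now, 12,500 returned in total over the period.
print(simple_roi(12_500, 10_000))  # 1.25
print(net_roi(12_500, 10_000))     # 0.25, i.e. a 25% return over the period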

Payback Period
Another metric for evaluating investment decisions is the Payback period: the time taken for an investment to be repaid, i.e. the investment divided by the revenue per period.
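A corresponding sketch, continuing the hypothetical figures above and assuming a constant revenue per period:

def payback_period(initial_investment, revenue_per_period):
    """Number of periods for cumulative revenue to repay the initial investment."""
    return initial_investment / revenue_per_period

print(payback_period(10_000, 2_500))  # 4.0 periods to repay the hypothetical investment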

Net Present Value
The Net Present Value (NPV) method takes account of interest rates (the cost of money) in the investment model. The Present Value of a future net revenue or saving arising in period ‘n’ is that amount discounted back to the present at the interest rate ‘i’. The Net Present Value (NPV) of an investment is then the sum of the present values of all its future returns less the initial investment.
As with Payback there is usually a simple break-even point for any particular interest rate beyond which future values (payments or annuities) result in a net positive return. NPV is a good way of differentiating between investment alternatives however the assumptions built into the model should be made explicit. For example: payouts do not always occur at the end of the period, interest rates may change, inflation may need to be considered, payments may not materialize or they may be greater than expected.
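A sketch of the NPV calculation for the same hypothetical figures, assuming each payout arrives at the end of its period (one of the assumptions noted above):

def present_value(cash_flow, rate, period):
    """Discount a single future cash flow back to the present."""
    return cash_flow / (1 + rate) ** period

def npv(rate, initial_investment, cash_flows):
    """Net present value: the sum of discounted future returns less the initial outlay."""
    discounted = sum(present_value(cf, rate, t) for t, cf in enumerate(cash_flows, start=1))
    return discounted - initial_investment

# Hypothetical figures: 10,000 invested now, 2,500 at the end of each of 5 periods, i = 5%.
print(round(npv(0.05, 10_000, [2_500] * 5), 2))  # about 823.69, a net positive return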


Internal Rate of Return
Having calculated the NPV of the investment from compounded monthly cash flows at a particular interest rate, it becomes evident that there may be an interest rate at which the NPV of the investment model becomes zero. This is known as the Internal Rate of Return (IRR). The IRR is the effective interest rate at which the present value of an investment’s costs equals the present value of all the investment’s anticipated returns.
The IRR can be used to calculate both a cut-off interest figure for determining whether or not to proceed with a particular investment (a threshold rate or break-even point) and the effective payout (NPV) at a particular interest rate. An organization may define an internal cost of capital figure (a hurdle rate) that is higher than market money rates. If the IRR of a project is projected to be below the hurdle rate then it may be rejected in favour of another project with a higher IRR.
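A sketch of the IRR as the rate at which the NPV (as sketched above) falls to zero, found here by simple bisection; it assumes the NPV declines as the rate rises, which holds for a conventional investment with an up-front outlay followed by positive returns:

def irr(initial_investment, cash_flows, low=0.0, high=1.0, tolerance=1e-6):
    """Internal rate of return: the discount rate at which NPV equals zero."""
    while high - low > tolerance:
        mid = (low + high) / 2
        if npv(mid, initial_investment, cash_flows) > 0:
            low = mid    # still a positive NPV at this rate, so the IRR lies higher
        else:
            high = mid
    return (low + high) / 2

# For the same hypothetical figures: 10,000 now, 2,500 per period for 5 periods.
print(round(irr(10_000, [2_500] * 5), 4))  # roughly 0.0793, i.e. an IRR of about 8%

An organization with a hurdle rate above roughly 8% would therefore reject this hypothetical investment in favour of alternatives.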

Discussion on financial rationality
Investment decisions imply the application of money, but also time, resources, attention, and effort to address opportunities and challenges in the operating environment. An often unexpected consequence of acting on the basis of a rational decision making process is that action alters the tableau of factors, which in turn reveals new opportunities (or challenges) that alter the bases on which earlier decisions were made. Some strategic technology decisions appear obvious; an organization needs a website, an email system, electronic invoicing, accounts and banking. Individual workers need computers, email, phones, shared calendars, file storage etc. This is because technology systems have expanded to constitute the basic operational infrastructure of the modern organization and (inter)networked citizen. How should we therefore characterize the dimensions along which high tech systems are evaluated? High tech investment decisions have been classified along two distinct dimensions: technology scope, and IT strategic objectives (Ross and Beath, 2002). Combining these two dimensions, Ross and Beath identified the areas of application for organizations’ high tech investment decisions (Figure below). Technology scope ranges from shared infrastructure (with global systemic impact) through to stand-alone business solutions and specialized applications impacting single departments and operational divisions. IT strategic objectives differ in terms of horizon rather than scope: from long-term transformational growth (or survival) to short-term profitability and incremental gain. Both dimensions highlight the organizational dependence on high tech systems and both suggest that purely financial justifications are not always practical or desirable.

Investment in shared infrastructure illustrates the case; the physical implementation and deployment costs of new IT and hardware may be quantifiable, but broader diffuse costs and benefits arise through less tangible aspects such as lost or gained productivity, new opportunities and improved capabilities.

JUSTIFYING DECISIONS
Evaluation is a decision making process. How do we decide what systems to use (develop or acquire, install and operate) in our organizations? We think of the process of making a decision as the process of evaluation. There are two contrasting dimensions to evaluation processes, both of which need to be considered if evaluations and the decisions that result are to deliver their anticipated benefits: quantitative methods, often financial models; and subjective methods addressing intangible and non-financial aspects.

Each decision is in a real sense an investment decision for the organization. Identifying who fills what role in the decision making process is a prerequisite and each actor then draws on various methods and tools to make their case. However the decision maker cannot rely on pure fiat or role-power to arrive at the best decision while at the same time achieving consensus and buy-in. Decision makers engage in convincing behaviour drawing on a mix of objective and subjective resources as evidence supporting their decision.

The justification of projects across the range of IT investment types necessarily differs as the costs and benefits differ in terms of quantifiability, attainability, size, scale, risk and payout. We should therefore have a palette of tools to aid evaluation and decision-making. Ross and Beath (2002) make the case for “a deliberate rationale that says success comes from using multiple approaches to justifying IT investments.” Powell (1992) presents a classification of the range of evaluation methods. Evaluation methods are broadly objective or subjective. Objective methods are quantifiable, monetized, parameterized, aggregated etc. Subjective methods are non-quantitative, attitudinal, empirical, anecdotal, case or problem based. Quantitative methods include financial instruments and rule-based approaches. Multi-criteria and Decision Support System approaches cover cybernetic or AI-type systems that use advanced heuristics or rule systems to arrive at recommendations. Simulations are parameterized system models that can be used to assess different scenarios based on varying initial conditions and events.
Table: Evaluation methods. Adapted from Zakierski's classification of evaluation methods (Powell, 1992)

It is noteworthy however that many of these evaluation methods are in fact hybrid approaches, incorporating both subjective and objective inputs and criteria e.g. Value analysis (Keen, 1981).

In accountancy, measurement and evaluation are considered to be separate, involving different techniques and processes. Furthermore, the evaluation process is expected to balance both quantitative and qualitative inputs. Bannister and Remenyi (2000) argue that the evidence suggests high tech evaluations and investment decisions are made rationally, but not formulaically. This is in part because what can be measured is limited and the process of evaluation involves the issue of ‘value’ more generally, not simply in monetary terms. Investment decisions must involve, they argue, the synthesis of both conscious and unconscious factors.
“To be successful management decision making requires at least rationality plus instinct.” (Bannister and Remenyi, 2000)
In practice, decision making is strongly subjective; while grounded in evidence it also requires wisdom and judgement, an ability that decision makers acquire over time and in actual situations through experience, techniques, empathy with users, deep knowledge of the market, desires and politics. A crucial stage of the decision making process is problem formation and articulation: reducing problems to their core elements and interpreting them, “the methods of interpretation of data which use non-structured approaches to both understanding and decision making.” (Bannister and Remenyi, 2000) ‘Hermeneutic application’, as they describe it, is the process of translating perceived value into a decision that addresses a real problem or investment opportunity. It is necessary because the issue of ‘value’ often remains undetermined; it may be (variously): price, effectiveness, satisfaction, market share, use, usability, efficiency, economic performance, productivity, speed, throughput, etc.

DISCUSSION
When should we consciously employ an evaluation approach? Evaluations are made, implicitly or explicitly, any time we reach a decision point. Recognizing and identifying the decision point may appear obvious but is in fact often unclear at the time. Evaluations and decisions are made whenever an unexpected problem is encountered, when further resources are required to explore emerging areas of uncertainty, or when another feature is identified or becomes a ‘must have.’
Evaluation methods may be categorized further as ex-ante versus ex-post. Ex-ante methods are aides to determining project viability before the project has commenced; they are exploratory forecasting tools and their outputs are therefore speculative. Ex-post methods are summative/evaluative approaches to assessing end results; they are therefore of limited value for early stage project viability assessment.

Systems development life cycles bring high tech product project decisions and therefore evaluation into focus in different ways. Stage-gate models concentrate decision making at each stage-gate transition. Agile models explicate decision-making by formalizing the responsibilities of different roles on the project and their interactions on an on-going basis. Both extremes aim to highlight the following: decision points, the person responsible for asking for (and therefore estimating) resources, and the person responsible for stating and clarifying what is needed (scope and requirements). Indeed, formalizing the separation of role ownership (between requirements, estimation) and responsibilities (between value and delivery) is one of the key benefits of any life cycle.

Is the decision already made for us? As the high tech and IT sectors mature, so too do we see the gradual stabilization of the software, services and devices that constitute the assemblage of tools and systems of our modern internetworked lives. Nicholas Carr (2005) predicts that we are witnessing the inevitable shift of computing from an organizational resource (and competence) into a background infrastructure. Several factors are driving the dynamic. Scale efficiency of development: specialist teams best develop complex, feature rich, usable systems. Scale efficiency of delivery: global service uptime, latency and storage performance is best delivered by organizations with global presence and specialist competencies in server farms, grid and cloud computing. The green agenda reaps savings by shifting computer power consumption from relatively inefficient desktops and office servers into energy efficient data centres. Carr’s point is that general purpose computing is gradually shifting towards a ‘utility’ model and therefore the era of corporate computing is effectively dead. The consequence is the commoditization of software and services currently thought of as ‘in-house’ offerings: email, file storage, messaging, processing power etc. The implication of this trend is to change the way we view high tech and IT projects, their evaluation and delivery to our organizations. The decision is no longer one of build versus buy (run and operate) but ‘rent.’

CONCLUSIONS
Requirements and evaluation are crucial activities in the overall process, and the decisive moments surround evaluation: valuing and costing product features and projects. But the work of systems development presents complex issues. There are inevitable intrinsic inequalities and asymmetries between the actors involved: product owners, developers, users, customers, organizations, business, and other groups. Interaction is often characterized by processes of persuasion: persuading and convincing those involved in producing, consuming and managing the development process.

REFERENCES

Economic Aspects of Digital Production

SOME (SIMPLE) ECONOMICS FOR DIGITAL MEDIA
One of the central problems in developing high tech systems is that there appear to be unavoidable trade-offs in managing the scale and connectedness of emerging high tech systems. Everything depends on everything else; even well-bounded tasks are complicated by unexpected dependencies on hardware or other technologies. What do we need to know in order to identify, describe, and address these various complications and difficulties adequately as they arise? We will begin the process of understanding the problem domain and approaches to addressing its difficulties by illustrating economic aspects of information goods, some work aspects of software engineering (digital production), and the classic dimensions of project management.

New media and information industries are refining if not redefining our knowledge of the economics of markets, products, services and production. Broadly, the challenges involve information goods, high tech systems and bases or markets of user/consumers. However, while the business models, technology and material foundations of these new ‘goods’ are constantly changing, the principles of economics do not. There are many dimensions along which information goods and systems differ from purely material products and services. Information goods are ‘experience goods’ (‘consumed’ by experiencing or operating them), they are subject to the economics of attention (if you are not paying for the product you are the product), and the technology itself is associated with pronounced production-side economies of scale, product feature scale, and greater potential for user ‘lock-in,’ ‘switching costs,’ and ‘network externalities’ (Shapiro and Varian, 1998). For the purpose of this section we will focus on the economic case for production-side scale economies of software. The same logic applies to other digital media and to products with digital media components.

Unlike many physical goods, information-rich products have some distinctive economic qualities and characteristics. Like all information-rich goods (products like computer hardware, books, newspapers, film, television) the initial design and production of the first copy of a software product demands a huge up-front investment in development before there can be a payout. Unlike physical goods manufacturing, the mass reproduction of a digital or information-rich good is a trivial act of copying. For pure digital goods there are vanishingly small incremental costs in terms of the energy used, storage space and time taken to duplicate. In software manufacturing (if such a term can even apply in the current era) the development costs far outweigh the reproduction costs. Development costs dominate the economic cost characteristics of software.

The following presentation is adapted from Oz Shy’s book, The Economics of Network Industries (Shy, 2001). The argument goes as follows: as sales/consumption of the product increase, the average cost of the product approaches the cost of producing and delivering the next unit (the marginal cost). In the case of a purely digital good (software, information, media, etc.) the marginal cost of production is very small and often carried by other parties (e.g. the broadband service). Applying the logic of a cost-based pricing model to your product therefore suggests a strategy of effectively giving it away.

The total cost of production at a particular level, TC(q), is the sum of the sunk R&D cost θ (the cost of developing, testing and releasing the software) plus the cost μ of shipping one copy to the customer multiplied by the q copies produced and shipped.

1. TC(q) = θ + μq

Define the average cost (AC) of production as the total cost of production at a particular level divided by ‘q,’ the quantity produced (ideally also sold) at that level.

2. AC(q) = TC(q)/q

Substituting equation 1, the average cost becomes:

3. AC(q) = θ/q + μ

The marginal cost at a particular production level, the additional cost resulting from a small increase in the production level, is the incremental cost divided by the change in quantity produced.

4. MC(q) = ∆TC(q)/∆q

And in the limit (the derivative of equation 1 with respect to q):

5. MC(q) = μ

A graphical analysis (below) of average and marginal software production cost as functions of quantity demonstrates that the average and marginal costs converge at high output levels (Shy, 2001).


Figure: Cost and price characteristics of software (adapted from Shy, 2001)
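A small numerical sketch of equations 1 to 5 in Python; the values of θ and μ are purely illustrative assumptions, not taken from the text:

theta = 1_000_000   # sunk R&D cost of producing the first copy (illustrative)
mu = 0.50           # cost of producing and shipping one additional copy (illustrative)

def total_cost(q):            # equation 1: TC(q) = theta + mu*q
    return theta + mu * q

def average_cost(q):          # equations 2 and 3: AC(q) = theta/q + mu
    return total_cost(q) / q

for q in (1_000, 100_000, 10_000_000):
    print(q, round(average_cost(q), 2))   # 1000.5, 10.5, 0.6
# As q grows the average cost falls towards the marginal cost mu (equation 5),
# which is the convergence shown in the figure.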

The implication of this analysis is that for every price you set there exists a minimal level of sales beyond which any additional sale results in a profit. One conclusion from this argument is that ‘cost-based pricing’ is not a viable strategy for software because there is no unique break-even point (Shapiro & Varian, 1998). In effect, the more units you sell the lower you can set your price. The logic of cost-based pricing suggests you should charge very, very little, or give software products away (if there is a large potential market for them). Software markets are therefore subject to huge economies of scale.
“the more you produce the lower your average cost of production.”
(Shapiro and Varian, 1998)

HIGH TECH PRODUCTS AND PLATFORMS AS ECONOMIC SYSTEMS
Previously we characterised the dominant aspects of high tech products as their intrinsic complexity and propensity to change. Both aspects lead high tech products to exhibit ‘systemness’ or systematicity within their environments. For example, software itself may be both a product and a platform. To illustrate: interdependencies arise whenever a software program makes functions accessible via an API (Application Programming Interface). APIs allow other programs to use the first program. The consequence is that a combination of the two programs allows us to accomplish something new that we couldn’t do with the separate programs. These effects are termed complementarities and they give rise to system-like effects in the computing environment and in the market for high tech products and services (Shapiro and Varian, 1998).

Complementarity and Combinatorial Innovation
Digital goods are apt to exhibit complementarity and to produce novel utility through combinatorial innovation. When goods that are complements are produced, the combination of the two products becomes more desirable and valuable to users than either product alone.

Furthermore, we can show that the economic effect of complements dictates that
“aggregate industry profit is higher when firms produce compatible components than when they produce incompatible components.” (Shy, 2001)
The reason is that the sunk cost of R&D can be averaged over a larger market, and larger markets are generally better for all firms, even competitors, regardless of their market share.
“the firm with smaller market share under compatibility earns a higher profit under compatibility.” (Shy, 2001)
This is because the market itself is generally larger; hence the marketing strategy question, ‘do you want a large piece of a small pie, or a small piece of a much larger pie?’ Why is this relevant? It is relevant because it is one of the dynamics that drives change in the operating environment of organizations. Synergies in Internet services and platforms have driven constantly expanding integration and adaptation, change and innovation. The internet boom of the 90s through to today is largely a consequence of 'recombinant growth' or combinatorial innovation of general purpose technologies (Varian et al., 2004). The idea of combinatorial innovation accounts in part for the clustering of waves of invention that appear whenever some new technology becomes successful. The ubiquity of one program can act in turn as a platform for other programs; for example the mutual complementarities between Twitter, Bit.ly, and Facebook. Much of what is termed Web 2.0 computing can be thought of as leveraging complementarities of different technologies, which in turn creates clusters of innovation.

Compatible Products are Driven in Turn by Market Standards
Markets incorporating complements and compatible products welcome technological standards (Varian et al., 2004). Standards are desirable because they facilitate complements and compatibility. Open standards are better because of the free availability of technical rules and protocols necessary to access a market. However even a closed or proprietary standard is preferable to none as it provides an ordering influence, providing rules or structures that establish and regulate aspects such as interoperability or quality. Network effects arise from the utility consumers gain from combinations of complementary products (Shapiro and Varian, 1998, Shy, 2001, Varian et al., 2004).

The very simplest network effect can be illustrated by the example of fax machines. The first purchaser of a fax machine has no one to send a fax to. A second fax machine bought by the first buyer’s friend allows them to send faxes to each other, which is somewhat useful. However if there are thousands of fax machines, in firms, government agencies, kiosks and people’s homes then the fax machine becomes more useful to everyone. As the market becomes larger the usefulness, or utility, of fax machines as a class of technology becomes greater.
The principle applies equally to single categories of networked technology like fax machines as it does to families of technologies that can interoperate. Network externalities arise between automobiles and MP3 players if auto manufacturers install audio jacks or USB ports to connect the car’s sound system with the MP3 player. The utility of both cars and MP3 players increases. Standards and openness drive further growth and innovation (and lock-in and switching costs etc). Standards enable software markets that in turn enable hardware sales that enable software etc. all enabled by a standard.

Software has a unique role as the preeminent enabling technology for hardware, and this has unexpectedly led to software becoming the platform itself. Software – operating systems or execution environments like browsers and browser-based ecosystems like Facebook – enables developers to achieve a degree of independence from the hardware. Such platform software becomes essentially a new type of standard, which may itself be open or proprietary, and the same economic models dealing with complementary goods and compatibility apply. Therefore the same kinds of innovation clustering, producing waves of combinatorial innovation, can be seen to occur with successful platforms.

A software platform benefits from the variety of add-in software written for the platform, and this in turn generates a virtuous cycle of value growth and further innovation as products are re-combined and used in novel ways. In summary, the economic characteristics and market logic of software products drive them towards interdependency with other software; standards (closed or open) play a huge role in enabling this. The whole context of software production exists within an ecosystem of different products and services, which are in effect environments or platforms themselves, and these arguments explain in part the ever-expanding ubiquity of software within technological systems.

DISCUSSION
The various engineering professions (civil, mechanical, chemical etc.) typically separate design work from production work, treating production via either the project management perspective for once-off style constructions, or the process control perspective for managing operational environments. However software production (design and development) has proven difficult if not impossible to control via predominantly construction perspectives or as a manufacturing process. Why is this? Why shouldn't software engineering lend itself to the kinds of management instruments that proved so successful in the classical sense of Fordist production? Why isn't software more like civil engineering, for example?

Well, the digital economy is subject to some interesting essential and intrinsic characteristics that, while not absent in physical goods markets, occur to a far greater extent than for physical goods. In digital production the process of manufacturing the end product becomes a trivial exercise of electronic duplication, with the marginal cost of manufacturing additional copies being effectively zero. The production cost characteristics of software and many high tech goods therefore shift the focus to the process, effort and cost of producing the first unit. Software is costly to develop but cheap to reproduce. Multiple copies can be produced at a constant or near zero per-unit cost. There are no natural capacity limits on producing additional copies. The costs of software production are dominated by the sunk cost of R&D. Once the first copy is created the sunk costs are, well, 'sunk'! Software production costs are therefore dominated by employee/human costs (salaries and servicing the working environment) rather than material costs (computers).

This initial analysis seems to suggest that software development efforts should be treated like stand-alone projects, i.e. time-bounded design and development of a finished product. This is indeed characteristic of many industry settings, e.g. device/hardware software in telecommunications, robotics, mission critical systems in aviation and aerospace, and critical infrastructure such as energy distribution and internet backbones.

Software design and development produces few if any substantial material assets or residues. Software production models should therefore emphasize design activities rather than manufacturing activities. Software R&D (the cost of developing, testing and releasing software) is a human, knowledge-intensive activity. The consequence is that while a software firm’s strategic advantage is manifest in its products, its competitive capability is bound up in its employees’ design knowledge and experience.

But software and high-tech yield a new kind of cornucopia, a wealth of value that is becoming more significant and more freely available. Software begets software and systems support other systems. The whole technological infrastructure of microprocessor-led, computer-driven, software and high-tech device innovation has kept producing value and benefits for organisations, markets, and society at large for 50 years or more. The fact that it continues to evolve and is still implicated in societal transformation suggests it will continue for a while longer.

REFERENCES


Thursday 13 September 2012

Implementation (SDLC)

USE-PRODUCTION-INTERACTION
The heart of a systems development production process is the work of implementation: designing, coding, testing, usability, scaling, architecture, refactoring. Its flip-side is the system in use: the feedback of users, usability in practice, unexpected uses, the goals users actually achieve by using the system, their met and unmet needs, how they obtain value from its use. In some sense the problem of production, of organizing teams to develop and maintain complex, interdependent and interrelated digital systems, is largely solved. Production poses a relatively well-known domain of problems and we have a variety of possible solutions available to address the challenges of intrinsic complexity and task interdependence, of scale and size of production, products and markets. What is less well understood is the domain beyond engineering: the dynamics between customers, users, producers and the market, what we term ‘systems.’

Producing implementations for high-tech ambitions. ‘Implementation’ is the catch-all term for the production activities that follow an up-front requirements analysis, evaluation, and ‘design’ process (Bødker et al., 2004, Gregory and Richard L., 1963, Avison and Fitzgerald, 2006). Under this view ‘implementation’ is the catch-all for design, architecture, coding, testing, refinement, optimization, packaging and finishing a high tech project.

IMPLEMENTATION: DESIGN, TEST, AND DELIVERY
In the (traditional) view of systems development the SDLC brackets everything to do with concrete product production under the banner ‘implementation’ (Figure below). Implementation covers product design, development, test and delivery. It appears strange that such wide-ranging and yet central activities of the SDLC should be relegated to what appears at face value to be one quarter of the lifecycle.

Figure: SDLC as interrelated activities

I might argue on this basis alone that the SDLC perspective on implementation is too broad (indeed dismissive of ‘production’) to be of much practical use. Let us however focus on contemporary views of implementation in high tech product life cycles.

Implementation has two faces, a technological facet and a social facet. Implementation covers everything dealing with the concrete realization of a product, everything that is hinted at during the more abstract phases or activities of requirements analysis, evaluation/design and maintenance (these comments must of course be qualified by your own working definition of the SDLC). On the technological side implementation deals with design, architecture, feature functionality, deployment, installation etc.; on the social side implementation deals with feature acceptance, usability, scalability. What then does implementation encompass? Implementation may be viewed as construction (production). Implementation also often covers rollout or delivery, and a third meaning of implementation is that surrounding organizational change management, in particular change supporting ERP implementations. In the case of ERP implementation the technology system is often quite static, a finished product; however flexibility is available in how the product is ‘configured’ to deliver functionality. ERP configuration is therefore a more limited kind of systems development that may or may not work well within the institutional constraints of a particular organization. One way of thinking about implementation is as a problem of ‘introduction,’ something taking place in the conversational interactions surrounding analysis, design, coding, and test activities.
“The roll-out is where theory meets practice, and it is here that any hidden failures in the earlier stages appear” (Boddy et al., 2008)
Accordingly, the byword for an implementation initiative is ‘order’: a project should roll out in an orderly, controlled way. However large-scale rollouts of technology are notoriously difficult, ranging over technological and social/organizational challenges, for example:
“ERP implementation is an organisational change process, rather than the replacement of a piece of technology. It impacts strategy, structure, people, culture, decision-making and many other aspects of the company.” (Boddy et al., 2008)
Implementation is therefore often characterized as a project management problem rather than a problem extending to and impacting the activities prior to and following production. In this guise implementation is a matter of project execution, separate from the ex-ante (up-front) process and separate from the ex-post (delivery) process. Such implementation projects more often than not necessitate further analysis, evaluation, and design alongside the work of coding, configuring and testing a new system.

MUTUAL ADAPTATION
Iterative life cycles and agile methods have reworked the relationship between the activities of the SDLC. The Rational Unified Process and methods like SCRUM anticipate that all activities and phases will occur at the same time. Both overcome the chaotic consequence of ‘doing everything at once’ by mandating highly structured roles and interactions, many mediated through distinctive techniques like ‘the planning game,’ ‘planning poker,’ ‘the on-site customer,’ ‘refactoring,’ ‘regular releases,’ ‘unit testing,’ etc. The big message for test and design work is that you can’t design without testing, and testing in all its guises is one of the strongest drivers for design.

The greatest test, and opportunity, for a new technology is when it is removed from the laboratory into the user environment. Implementation is the process of:
“mutual adaptation that occurs between technology and user environment as developers and users strive to wring productivity increases from the innovation.” (Leonard-Barton, 1988)
Implementation is therefore a natural extension of the invention process, albeit one that takes place within user environments. The dynamic can be thought of as a kind of convergence towards an ideal end goal. However, acknowledging the concept of equifinality (Leonard-Barton, 1988), our end goal may simply be the first solution that works from among a universe of possible solutions.

Implementation in the user environment generates learning that redefines our understanding of technology-in-use and therefore draws us back into new prototyping, testing, feasibility, problem solving and idea generation. Likewise, technology implementation in the user environment generates new learning about possibilities in user and corporate performance. Technology interaction enables possibilities for redefining tasks and roles, business function and business model. Learning through implementation is often framed as a tension between narratives of technologically driven change and user resistance. Instead Leonard-Barton offers the idea of continuous ‘re-invention’ to interpret this tension: learning through implementation feeds back into technology and corporate performance, thereby enabling the productive (though unpredictable) dynamic of mutual adaptation (Leonard-Barton, 1988).

While most current presentations of the technology development dynamic now include user involvement, they persist in characterising innovation as a flow from idea generation through to production. Including deployment in the user environment, and user involvement, within an on-going cycle of releases and updates incorporates the impact of the learning that occurs. Mutual adaptation is a constant in the field of technologically mediated innovation and, if recognized, may be harnessed as a productive dynamic to drive both social and technological aspects of systems development. The implication for organizations involved in systems development is to “break down the firm separation of development, test and operations.” (Hamilton, 2007)

Kongregate Games (case)

This case is adapted from Nicholas Lovell’s game publishing guide (2010).

You run a small Flash Game company that releases its games to run on Kongregate’s game portal. The revenue model offered by being hosted on Kongregate’s portal is ad-funded based on how often the game is played online. The company development team has four people: 2 programmers, and 2 designers with responsibility for art assets, models, audio and video content. The Ad-funded revenue model is summarized in the table below.
Table: An Ad-funded revenue model on Kongregate for Flash games (Lovell, 2010)


Under this model, assuming a minimum return from the portal operator to the developer of just 25% of Ad revenue (the best case may be up to 50%) and just two Ad impressions per game play at a CPM from advertisers of 1 euro, developer revenue ranges from €5 per month to €500 per month for each game as monthly plays vary and impressions range from 20,000 to 2,000,000 per month (Table below). CPM is the figure used to express the advertiser’s cost per thousand impressions (gross Ad revenue = CPM × impressions/1,000). In this case we assume Kongregate and advertisers have agreed a CPM ‘cost’ rate of €1. Note that Lovell’s figures are based on a CPM of £1 (GBP).
Table: Game Ad-revenue projections for three cases of plays (from Lovell, 2011)
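The Ad-funded model can be sketched as a short calculation in Python; the share, impressions per play and CPM are those stated above, and the function name is only illustrative:

def monthly_developer_revenue(plays_per_month, impressions_per_play=2,
                              cpm_eur=1.0, developer_share=0.25):
    """The developer's share of gross Ad revenue for a month of game plays."""
    impressions = plays_per_month * impressions_per_play
    gross_ad_revenue = (impressions / 1000) * cpm_eur   # CPM is the cost per thousand impressions
    return gross_ad_revenue * developer_share

for plays in (10_000, 100_000, 1_000_000):
    print(plays, monthly_developer_revenue(plays))   # 5.0, 50.0 and 500.0 euro per month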


Lovell suggests that the revenue figures for an Ad-funded Flash game served via a specialist game portal like Kongregate are not impressive.
“Even a widely successful game, getting 1 million plays a month, which would be a huge achievement, would only generate [€500] a month in revenue for the developer.” (Lovell, 2010)
Strategic Business Evaluation: To Integrate with the Portal API (or not)?
The development team are keen to increase the company’s revenue stream and have decided to consider the case for integrating their Flash game with Kongregate’s API for leaderboards and challenges (refer to the earlier statement of Ad-funded revenue on Kongregate). Integrating with Kongregate’s API offers the developers an additional 10% share of the Ad-revenue from Kongregate. How do they assess the business case for API integration with the current game (noting that it could become a part of all future games too)? The team estimates it will cost 20 days of a developer’s time to code up, test, and roll out integration between the portal API and their own Flash template engine. Given a programmer ‘day’ cost of €200/day, the investment cost for portal integration, developed over 20 days, comes to €4,000. The developers estimated the best-case investment cost and best-case additional cash flow as follows:
  • Development cost (initial investment) €4,000
  • Additional monthly revenue for 24 months (best Case III) €200
Questions:
  1. What is the simple ROI for each case?
  2. What is the simple Payback period for each case?
  3. Which business case holds up over 2 years with a short-term interest rate of i=5%? 
  4. Finally, should the development team invest in integration?
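One way to work through the questions, reusing the financial measures from the Evaluation post above; the sketch covers only the best-case (Case III) figures given here, and treating i = 5% as a simple monthly rate of 5%/12 is a simplifying assumption:

investment = 4_000        # 20 developer-days at EUR 200 per day
monthly_revenue = 200     # best case (Case III): the additional 10% Ad-revenue share
months = 24

total_payout = monthly_revenue * months                  # 4,800 over the two years
simple_roi = (total_payout - investment) / investment    # net gain over the initial investment
payback_months = investment / monthly_revenue            # months needed to repay the outlay

monthly_rate = 0.05 / 12
npv = sum(monthly_revenue / (1 + monthly_rate) ** t
          for t in range(1, months + 1)) - investment

print(simple_roi, payback_months, round(npv, 2))  # 0.2, 20.0 and a positive NPV for this best case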

REFERENCES
Games Brief - The Business of Games (www.gamesbrief.com)
Lovell, N. (2010) How to Publish a Game, GAMESbrief.
www.kongregate.com: "Reach millions of real gamers with your MMO, Flash, or social game. Make more money."

Build the right thing (Interaction Design)

THE DESIGN PROCESS
What is good design? Bill Moggridge states that:
“good design has always been concerned with the whole experience of interaction” (Moggridge, 1999)
Outwardly design is concerned with aesthetics and experience: the experience of using a product, of interacting with an object, product, service, or system of products and services. Inwardly design is also concerned with cost of materials, complexity of assembly, maintainability of modules and of the whole product, lifetime, cost of operation, manufacture, distribution, delivery and return systems. The inward and outward aspects of design are tightly interrelated but further complicated (beneficially, it turns out) by user involvement in the development process. User involvement in development is now recognized as one of the key success factors for high tech design and systems implementation (Leonard-Barton, 1988, Kraft and Bansler, 1994, Bødker et al., 2004, Grudin and Pruitt, 2002). User involvement is beneficial, in part at least, because both user understanding and design objects can be adapted throughout the development process (Leonard-Barton, 1988).

The quest for better design should be tempered by the various problems 'improvements' produce. The search for an optimal solution is often an unnecessary diversion; indeed an optimal solution will typically optimize according to a narrower set of criteria than is practical or desirable in the general situation. As Hamilton comments on designing and deploying internet-scale services, “simple and nearly stupid is almost always better in a high-scale service” (Hamilton, 2007). Hamilton recommends that optimizations should not even be considered unless they offer an order of magnitude or more performance improvement.

The ultimate measure of success for high tech design is for the product to become a seamless aspect of the user environment; to become simply, a tool for use, ready-to-hand.
“We need to be able to rely on an infrastructure that is smoothly engineered for seamless connectivity so that technology is not noticeable.” (Moggridge, 1999)
Put another way, design succeeds when it disappears from perception.

DESIGN QUALITIES
Good design ‘lends itself to use.’ With physical objects the designer works within the constraints (and possibilities) of materials and space. The user’s embodied capability and capacity influence the size, shape, and appearance of a ‘use’ object. Physical design works with material affordances and constraints. Designers make use of experiential and cognitive cues such as ‘mapping’ and ‘feedback’ to achieve their goals (Norman, 2002). These approaches work because users form mental models or theories of the underlying mechanisms employed in mechanical objects. Indeed, users actively look for such cues when confronted by a different or a new object for use. The effectiveness of cues in translating designed performance into viable user mental models translates in turn into effective object interaction: 'good design lends itself to use'. Good design is evidenced by the availability of a ‘clear mental model’ (Moggridge, 2006) or metaphor for a system. An effective mental model builds seamlessly into a coherent, consistent ‘system image’ (Norman, 2002). A compelling system image is another strong indicator for successful system use. However digital media, virtual goods and computer based high tech systems pose a unique set of problems as a consequence of the break between an individual's knowledge of the physical world (intuitive, embodied, physical and temporal) and the computational world of digital objects.
“What do you get when you cross a computer with a camera? Answer: A computer!” (Cooper, 2004)
Microprocessor-based goods and computer-mediated virtual environments can be made to perform in apparently arbitrary or idiosyncratic ways, what Alan Cooper terms ‘riddles for the information age’ (Cooper, 2004). In essence, by crossing computers with conventional physical products, the resulting hybrid products work more like computers than their physical forebears. In the past, physical-mechanical elements often constrained design implementation, whereas digital designs can in general overcome the constraints of electro-mechanical mechanisms. This break is both empowering and problematic. Empowering, because it enables the designer to achieve things impossible with physical-mechanical elements alone; problematic, because while the 'back-end' digital design may conform to an architectural view of the technology (is 'architecture' simply another way of saying the developer's implementation model?), the outward appearance and behaviour available to users may be manifest in quite different ways. Mental model thinking can be problematic because, while the design implementation model may be self-consistent and behave logically according to its own rules, the implementation rules may appear obscure, overly detailed, or unintuitively linked to performance.

This break between implementation model and the user’s mental model is significant and necessitates a new language for describing and designing digital systems. While digital systems must obey their own (necessary) rules, the presentation of a system to the user should be designed with the user in mind. Taking his cue from physical goods design Don Norman suggests that a well designed microprocessor or computer-based system should still present its possibilities in an intuitive way (Norman, 2002). It should give the user feedback, allow the user to correct performance and offer a coherent ‘mental model’ to enable the user to understand and learn the product through use (Cooper et al., 2007, Norman, 2002).


The design of digital interaction can be thought of as spanning four dimensions (Moggridge, 2006). One-dimensional, linear or textual representations such as text, consoles, voice prompts, etc. Interactions building on two-dimensional visual or graphical renderings: layouts that juxtapose graphical elements or that depend on spatial selection and use/interaction in a two-dimensional field. Three-dimensional fields that make use of the third spatial axis, depth, where depth is actually employed rather than simply mimicked through perspectival representation (e.g. as a backdrop to essentially 2D interaction). The fourth dimension is most often thought of as time: meaningful temporal sequences and flows of interaction (rather than simply consuming a recording or animation). Temporal interaction may be applied to the preceding dimensions and involve complex interaction choreographies that are built up over time to achieve some goal.


  • 1D interactions are employed by command line driven computing environments.
  • 2D interactions are employed by typical applications and PC operating systems.
  • 3D interactions are employed in immersive gaming environments.
  • 4D interactions may involve mode shifts in an application interface, queries applied to data, or transitions between application states.

Build the thing right (SDLC)

THE VERY IDEA OF SYSTEMS DEVELOPMENT
While the idea of the SDLC (Systems Development Lifecycle) is firmly embedded in the Information Systems field, there is no single concrete, principled formulation of the SDLC ‘sui generis.’ It is notable that the earliest formulations of systems development (Gregory and Richard, 1963) resonate strongly with current presentations (Valacich et al., 2009). Gregory and Richard (1963) described the four phases or stages involved in creating a new information system (Figure below).
Figure: Management-information-systems design approach (Gregory & Richard, 1963)

All formulations of the SDLC are derivative of other life cycles described and used in practice prior to the various distillations of the SDLC. In spite of claims to the contrary, there is no single authoritative, well-understood methodology for managing the development of information systems. Each methodology is either the product of a particular group of people working in their specific work contexts, or the output of an academic or practitioner attempt to construct a generalizable description of development processes. The systems development life cycle is a stage-wise representation of activities, commencing with the most general description of some product to be designed and refining it over stages into a completed good (Figure below).
Figure: Systems Development Life Cycle

The following table (Table below) summarises the conceptual stages of the SDLC. It is readily apparent that the systems development life cycle is synonymous with the waterfall model, and that the waterfall supplies many of the original concepts found in most, if not all, of the frameworks used to control and manage the production of high tech goods.
Table: Stages of the SDLC (adapted from Avison and Fitzgerald, 1995)

DISCUSSION: THE PRACTICAL REALITIES OF DEVELOPMENT
The SDLC is the original prototype of the life cycle. Linear, serial, stage-gate or milestone development life cycles are employed in product disciplines and in strategic models of competitive innovation (Schilling, 2005, Tidd et al., 2001, Trott, 2005). Life cycles applied in other industries and occupations overlap with the work of high-tech design and development and influence in turn how systems development is seen to be structured. Like the SDLC, the product marketing life cycle runs from initial concept, to development, to market maturity and end-of-life. However, while life cycle archetypes represent the relationships between analysis, design, implementation and maintenance, they rarely describe their practical performance and accomplishment.

The work of developing, configuring, and servicing systems occurs within activities and processes such as service provision, project execution, product development and maintenance. These activities are located in time and place, and so the day-to-day, week-to-week flux of production takes on the appearance of regularity, of a common pattern in the process of creating and managing high tech objects. Systems development can be shaped with the aid of a life cycle model. A life cycle is simply a way of describing the relationships between the work processes constituting the provision, development and delivery of a product or service. Having, however, taken a critical perspective on the SDLC and life cycle concepts generally, I wish to explore and explain the value of and need for these activities, albeit activities that often overlap, occur 'out of sequence', and emerge in haphazard fashion. The following sections analyse the generic characteristics of the core activities of systems development, summarised here as Requirements, Evaluation, Implementation, and Maintenance (below).

Figure: The SDLC as a tetrad of inter-related activity.


Tuesday 11 September 2012

Outsourcing


Henry Ford’s Model T was the emblem of modern manufacturing systems characterised by suppliers and integrators working together to create value.
Industrial production, contracting, subcontracting and contracting out have been defining features of modern organisational forms since the industrial revolution and perhaps prior.
Two main organisational forms have prevailed in the modern era:
  1. Vertical integration, managing and owning the whole value chain process from procurement of raw materials through to production of the end product.
  2. Horizontal specialisation, focusing on crafting/creating/delivering excellence at one core stage of the process of production before passing the processed good on to another stage.
Several developments have shaped how these forms interact:
  • Fordist manufacturing created conditions for interfaces between the different tasks, activities, inputs, outputs, or stages of transformation making up the manufacturing process.
  • The Japanese Kanban system is one extreme of layered specialisation, with many small suppliers coming together under the umbrella of the main supplier/contractor/manufacturer in the manufacturing environment.
  • Inter-firm information/data process specialisation was enabled by EDI (Electronic Data Interchange) standardisation initiatives from the 1970s through to the turn of the century, now continuing under the aegis of XML and newer standards.
EDI enabled ‘e’ interfaces to be constructed between firms in a similar way to the input/output models of staged manufacturing.
Along the way it demonstrated the overcoming of geographical, spatial and temporal barriers to data exchange.
The modern global supply chain is an extreme case whereby a process’s implementation is facilitated by data exchange between a diverse array of firms thereby creating the very possibility of an integrated supply chain.


As an outsourcing destination Ireland has lost appeal throughout the last decade.
Rising costs and competition from developing countries have eroded many of the advantages that Ireland once held.
Consequently Ireland has itself become a net consumer of outsourcing services.
A driver of this trend is the steady erosion of Irish competitiveness at country level, a trend in place since the mid-1990s.
The Irish Central Bank's quarterly bulletin of January 2010 provides harmonised competitiveness indicators (HCIs) for the Irish economy. Cost-driven deterioration in Irish competitiveness has been partially compensated for by increases in productivity, but only, it appears, by shifting lower-cost, lower-value-added activities and processes offshore.
The picture for IT outsourcing is however less clear as Irish based offices of multinationals move up the value chain.
As in all mature markets, Irish firms and multinationals based in Ireland often outsource organisational functions to local or internationally based outsourcing providers: traditional areas like payroll, accounting, finance, legal, HR, purchasing, and logistics, but also marketing focused on SEO (search engine optimisation), web development, website hosting, and IT services such as e-mail and spam filtering, virtualised storage, and telephony services.
Core or primary value processes may also be outsourced but at a higher risk or for reasons other than cost reduction alone.
Whether providers of outsourced services based in Ireland themselves source their activities offshore or not is a matter for their own operations.


Claims for the size of outsourcing activity destined for Ireland and generated from Ireland vary widely, ranging from hundreds of millions to billions.
Helpdesk and international call centre operations are one area where firms still see value in Irish based operations, particularly where multilingual skills and addressing the European market are important requirements.
In 2003, the value of the outsourcing market in Ireland passed €209 million ($234 million).
Irish banks have outsourced considerable operational activities to third-party providers (e.g. Bank of Ireland's multi-year deal with HP followed by the switch to IBM).
The public sector in Ireland also has long experience with outsourcing services particularly IT (e.g. The Irish Revenue Commissioners and Accenture).
Regardless of the provisioning destination (whether onshore or offshore) the trend is for organisations to increase the investment in outsourcing projects.
Even so, firms' experience with the outsourcing phenomenon is mixed, as expectations to deliver higher levels of service grow and priorities shift from simple cost reduction towards value added.
Regardless of the experience with individual projects, outsourcing is likely to remain a popular option: with over half of CIOs in Irish firms having had budgets cut in 2009, cost saving will remain a huge driver of outsourcing initiatives.


HP Video Podcast: Be "on the business" for strategic IT and outsourcing
Tim Hynes, IT Director Europe, Middle East & Africa, Microsoft

Why Global Sourcing?
I argue that the sourcing phenomenon is an intrinsic feature of human societies that is amplified by scientific advance, manufacturing innovation, technology more generally, and accelerated in the modern era of computer based infrastructures, high-tech products and services.

What organizational activities and products are amenable to sourcing beyond the traditional boundaries of organizations? And if activities and products can be sourced beyond the boundaries of the organisation what models or modes can be used?

Outsourcing isn't a business fad, it is a fundamental part of modern industrial production. Capital-based manufacturing and production of goods and services is predicated on the basic idea of a division of labour. Specialised stages of manufacture (in other words, a supply or value chain) exist when skilled work is applied to some material, good or activity to add value, up to an end point when the good or service is consumed. All industrial and professional specialisation therefore represents a kind of outsourcing. No one organisation, firm or individual has within its power the totality of knowledge, skills, resources, effort and time to produce everything we need or desire. Sourcing has therefore been, and remains, an intrinsic aspect of work (labour and production) in society, from the most rural to the most metropolitan.

What therefore is sourcing? Consider the following definition:
“Sourcing is the act through which work is contracted or delegated to an external or internal entity that could be physically located anywhere. Sourcing encompasses various in-sourcing and outsourcing arrangements such as offshore outsourcing, captive offshoring, nearshoring and onshoring.” (Oshri et al., 2009)
In light of the prominence and pervasiveness of inter-firm sourcing, what are the advantages and disadvantages of different sourcing modes, and how are they justified and applied in historical and contemporary settings? The current situation is never completely estranged from its historical contexts. Historical trends in global sourcing lead into current topics and help to explain how local conditions have evolved.

For one reason or another, various sourcing modes have proved more successful in particular industries and particular locations. The relationship between technology trends and the expanding array of options for sourcing product components and services offers one set of explanations, such as the irresistible imperative of technology-driven change or of particular organisational structures. Other explanations emphasise uncertain contextual conditions and processes of emergent knowledge, adapting to and taking advantage of unique situations.

An interpretation of global sourcing discourse that managers can use effectively should be more than the straight application of technological recipes, formulas, methods, rules, and organisational templates. Reflective actors will always seek to identify the interests involved, to be aware of who benefits (or loses), in order to juxtapose and evaluate the various strategic decisions between in-house and outsourced delivery. Sourcing initiatives may proceed smoothly, but if they do not, what remedial measures can be employed to address the organizational and technological issues relating to global sourcing?

The reflective manager has a broad palette of concepts and frameworks for interpreting and deciding sourcing cases. However this area of organisational operations is constantly evolving and changing and so the manager must be adept at identifying emerging trends in sourcing relationships that are likely to be important in the future with implications for current situations. In this way involved actors can merge theory with context, against a historical backdrop, extrapolate and justify the implications of changing sourcing arrangements in complex inter-organizational relationships.

Case: Bank of Ireland Outsourcing 2000-2011
Irish banks have, in the past, outsourced considerable operational activities to third-party providers. Bank of Ireland's multi-year deal with HP followed by the switch to IBM exemplifies one particular case of the benefits and risks of adopting a deep outsourcing strategy in a digital 'information' industry.

(24 February 2003: article-link) BOI license desktop and server software from Microsoft.
(4 April 2003: article-link) BOI announce a 7 year deal with HP for IT services worth ~500M, with over 500 bank employees to be transferred to HP.
(2 July 2003: article-link) BOI announce a multi-million deal for banking software products.
(3 November 2010: article-link) BOI announce a 5 year deal with IBM for IT services worth ~500M.


References
Oshri, I., Kotlarsky, J. & Willcocks, L. P. (2009) The Handbook of Global Outsourcing and Offshoring, Palgrave Macmillan.

Definition


An IS professional will...
"use systems concepts for understanding and framing problems... [A] system consists of people, procedures, hardware, software, and data within a global environment. ...[They design and implement technology solutions by] understanding and modeling organizational processes and data, defining and implementing technical and process solutions, managing projects, and integrating systems within and across organizations."
(Topi et al., 2010: 369-370)

Sunday 9 September 2012

Parameters of Development

THE PROJECT MANAGEMENT VIEW
PMBOK characterises project management in terms of three variables: Quality, Cost, Time.
In contrast our view on management of high tech development describes the work in terms of four key variables: Quality, Cost, Time and Scope.

Unfortunately the four variables are interdependent in complex ways, correlating both negatively and positively with each other. Have you ever tried finding maxima/minima on a curve? How about a surface? What about a 4-D surface? What if the variables have different and incomparable types (money, people, hardware, tools, features, effort, complexity, internal dependencies, importance, priority, time, holidays, etc.)?

COST
"more software projects have gone awry for lace of calendar time than for all other causes combined... but adding manpower to a late software project makes it later."
(Brooks Jr., 1995)
What do people do when a project slips behind schedule? "Add manpower, naturally." (Brooks Jr., 1995) Cost, or its equivalent in resources (money, people, equipment, etc.), is a necessary input to any project. Available resources include, for example, the salaries of administrators and programmers, office space, computing hardware, software licenses, fast networks, and third-party services. Covering these costs and providing people and resources is a necessary prerequisite to project success, but doing so soon produces diminishing returns.

That is, all initiatives may reach a point beyond which the addition of further resources produces a diminishing return or may even degrade the project outcome. Why is this so? In the eponymous chapter of his influential book ‘The Mythical Man Month’ (1995), Fred Brooks makes the point that the theoretical unit of effort used for estimating and calculating project schedules is "not even approximately true [for] systems programming."
"the man-month as a unit for measuring the size of a job is a dangerous and deceptive myth. It implies that men and months are interchangeable." (Brooks Jr., 1995)
Brooks’ explanation is that the idea of an ideal man-month is only useful as an effort-estimation technique if the task is perfectly partitionable and requires no communication whatsoever among those doing the work.
Figure: Project success as a function of available resources
In the worst case a task (‘A’) cannot be partitioned and will take exactly as long as it will take regardless of how many (or few) people are assigned to it. Partitionable work is work that can be divided evenly among an arbitrary number of workers, thereby allowing the task to be completed in the shortest possible time by adding more workers.
CASE: Imagine delivering and collecting census forms from 1000 households. Census collectors can discuss and plan in advance who will deliver and collect from which households. The activity of planning adds a finite amount of time to the collection.
A single census collector would need to make at least 1,000 trips (waiting for the form to be completed if the residents are at home).
Ten census collectors would need to make at least 100 trips. Additional time may be required to re-coordinate if collectors double up on the same address etc.
If however the task cannot be partitioned perfectly (some citizens aren't home, some need help filling in the form, a census collector is out sick) the collectors need to spend more time communicating and coordinating closely with each other. As the number of collectors increases they reach a point beyond which adding additional workers imposes a communication/coordination overhead that in turn delays the work.
Figure: Completion time versus number of workers (adapted from Brooks Jr., 1995)
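To make Brooks' argument concrete, here is a minimal sketch in Python of the kind of curve shown in the figure above. The linear 'partitioned work' term and the pairwise communication term are illustrative assumptions rather than a formula taken from Brooks.

    # Sketch: completion time versus number of workers for a partially
    # partitionable task. Assumes 1000 units of work and a small fixed cost
    # per pair of workers who must coordinate with each other.
    def completion_time(total_work, workers, comms_cost=0.15):
        partitioned = total_work / workers          # perfectly divisible part
        channels = workers * (workers - 1) / 2      # n(n-1)/2 communication channels
        return partitioned + comms_cost * channels  # overhead grows quadratically

    for n in (1, 5, 10, 25, 50, 100):
        print(f"{n:>3} workers -> {completion_time(1000, n):7.1f} time units")

Past a certain team size the quadratic coordination term dominates, and adding workers lengthens rather than shortens the schedule.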

Tasks on high tech projects, almost by definition, involve complex interrelationships with other tasks that in turn demand a high degree of intercommunication between workers. Consequently high tech projects reach a point beyond which adding more people will result in the project delivering later (or not at all) rather than earlier. Understanding the degree of interdependence between project tasks in systems development highlights the need for communication in coordinating team members. It suggests that systems development projects are complex and difficult to manage.

TIME
"How does a project get to be a year late?... One day at a time."
(Brooks Jr., 1995)
Time is a crucial dimension of production activity. It turns out that an appropriate time line is a huge enabler for a project. However too aggressive a time target dooms a project to undue haste, unrealistic delivery times and, potentially, failure. Similarly, an excessively long time frame can defocus a team’s attention and starve the project of valuable feedback and checkpoints (figure below).
Figure: Project success as a function of available time.

Time to delivery falls into three categories: too little leading to unrealistic schedules and delivery expectations; too much leading to analysis paralysis or gold plating; and just enough, when work is delivered, often incomplete, but early and usable enough to give useful feedback to both the user and developer.
CASE: In 2002, Mitch Kapor, the creator of Lotus 1-2-3, brought together a group of people to build his dream: a new class of software that would redefine how people kept in touch with each other and managed their time. At the time some thought his OSAF (Open Source Applications Foundation) was building an open source replacement for Microsoft Exchange, but Kapor wanted something much more radical, a distributed mesh-like system that could collect, transform and share generally unstructured data for ideas and calendar items (Rosenberg, 2007). Towards the end of 2008 the project was nearing the end of the financial support that Kapor and others had provided. The paid programmers and contributors had gradually moved on, leaving the project in the care of volunteers from the open source community. The software project, code-named Chandler, was funded by charitable contributions amounting to over 7.8 million USD. The project delivered preview versions over 2007/2008 but had finally run out of money, energy and time.
Two practices usefully address the problem of managing time: iterations (or timeboxes) and milestones. Milestones and timeboxing are essential approaches to managing time when project tasks are complexly interrelated and require developers to coordinate and communicate closely with each other. Milestones are the large-scale markers for the completion of major stages in a project. The classic waterfall project is broken into stages, a stage-gate model, where the project transitions from one state to another. McConnell (1996) states that milestones are good at providing general direction but are usually too far apart and too coarse-grained to be useful for software project control. He suggests using 'miniature milestones', small one- or two-day tasks that can be finished and demonstrated.

An iteration or timebox creates an achievable conceptual boundary for the delivery of multiple work processes and is recognized as good practice for software projects (Stapleton, 1997). In recent years the concept of the iteration has been refined to be a release of new useful functionality developed over a one to four week duration that a customer can use and test (Beck, 2000). The key is to arrive at an appropriate timebox for the project. A timebox of several days or weeks can be considered an iteration or incremental delivery stage (see the section on Software Lifecycles). The key value of using milestones and release iterations is that they are opportunities for feedback; clear, unambiguous feedback.

SCOPE
A written statement of project scope and objectives is often a project’s start point. The scope may describe the problem area being addressed and the necessary and desirable features. Project scope will expand over time to include detailed features (figure below).
Figure: Project success and value creation as a function of scope.

The desired scope or feature list of a project should be clear and concise. Too large a list of features, or feature creep, generates problems of priority and coherence. A concise set of the most crucial features probably has a stronger (positive) influence on the underlying architecture of the product. Furthermore, "less scope makes it possible to deliver better quality" (Beck, 2000). 'I want it all and I want it now' is simply not reasonable. Consequently scope must always be limited or refined in some way. It is essential therefore that feature requests be valued and prioritised in terms of time and importance, and realistically estimated.
"For software development, scope is the most important variable to be aware of. One of the most powerful decisions in project management is eliminating scope. If you actively manage scope, you can provide managers and customers with control of cost, quality, and time." (Beck, 2000)
Requirements will usually appear to have a natural order or priority: what is most important, a prerequisite, a 'must have', a 'nice to have'. MoSCoW rules can be used to help expose priority (Stapleton, 1997); a small sketch applying them to a backlog follows the list below.
Mo Must have
S Should have
Co Could have
W Want to have but not this time round
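As a small illustration of applying the MoSCoW rules above, the following sketch orders a feature backlog by priority; the feature names and the backlog structure are hypothetical examples, not drawn from the text.

    # Sketch: ordering a feature backlog by MoSCoW priority (hypothetical data).
    MOSCOW_ORDER = {"Must": 0, "Should": 1, "Could": 2, "Want": 3}

    backlog = [
        ("Export scores to the portal API", "Should"),
        ("Basic gameplay loop", "Must"),
        ("Seasonal themes", "Want"),
        ("In-game achievements", "Could"),
    ]

    for feature, priority in sorted(backlog, key=lambda item: MOSCOW_ORDER[item[1]]):
        print(f"{priority:<6} {feature}")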
A perhaps unexpected consequence of product scope statements is the relationship between the scope's features and the eventual system design or architecture over time. This has implications for team structure, implementation architectures, and functional behaviour among others. The often close mapping between detailed requirements and the end design raises a risk that the user interaction model for the finished product will be strongly linked to or influenced by the underlying implementation model or technical architecture of the product. The end result is that a requirements document can overstretch its own 'scope' and verge into prescription for the eventual technical design.

Consider the following headings from a template for a single software requirement (Pressman, 2000).
Requirements definition: A clear, precise statement of what the user requires the system to do.
Statement of scope: State the goals and objectives of the software.
Functions: Functions or functional description of how information is processed.
Information: Information description (data) depicting data content flow and transformation.
Behaviour: Behaviour or interface description depicting control and change over time within the operating environment.
Validation criteria: What tests demonstrate valid operation and behaviour.
Known constraints: Procedural, legal, environmental, compatibility etc.
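Purely for illustration, the headings above could be captured as a structured record so that every requirement carries the same fields. The field values below are invented placeholders, not part of Pressman's template.

    # Sketch: one requirement captured under the template headings (values are invented).
    requirement = {
        "definition":  "A player can submit a score to the portal leaderboard.",
        "scope":       "Score submission only; player authentication is out of scope.",
        "functions":   "Validate the score, sign the request, send it to the leaderboard.",
        "information": "Player id, score value, timestamp; response status.",
        "behaviour":   "On failure, retry up to three times, then queue locally.",
        "validation":  "A submitted score appears on the leaderboard within 60 seconds.",
        "constraints": "Must respect the portal's API rate limits.",
    }

    for heading, text in requirement.items():
        print(f"{heading:>11}: {text}")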
QUALITY
"Quality is a terrible control variable"
(Beck, 2000)
Finally, quality! However quality might be defined, we should keep in mind that defining quality is a non-trivial exercise. Quality is usually highly contextual, situated in a prevailing culture of what constitutes good or bad quality. In the case of software the product (or service) is not a physical good and so does not wear out in the way that hardware does. Hardware degrades over time due to physical wear and tear, breakdown and mechanical or physical failure. Software still fails, however, and so it undergoes maintenance work to fix or enhance it over its economic life. For the purposes of a particular project the product’s quality is generally a negotiated concept.
Figure: Project success as a function of quality

Measures of product quality (open bugs, stability, user satisfaction, speed, scalability) may be identified in order to lock down the release date or one of the other variables. But the cost of treating quality as the control variable in order to satisfy a release date is often negative in the long run. Compromising quality affects pride in work, erodes customer confidence, and undermines your credibility and reputation. Don’t deliver something you know hasn’t been tested, or that fails its tests. Quality should be used to set thresholds and targets; using it as a control variable undermines and destroys the values we all aspire to.
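One way to keep quality as a threshold rather than a control variable is to state the release criteria explicitly and check them mechanically. The sketch below uses invented metric names and limits purely for illustration.

    # Sketch: quality as a release gate (threshold), not a variable to trade away.
    thresholds = {"open_blocker_bugs": 0, "crashes_per_1k_sessions": 2, "p95_response_ms": 500}
    measured   = {"open_blocker_bugs": 1, "crashes_per_1k_sessions": 1, "p95_response_ms": 430}

    failures = [name for name, limit in thresholds.items() if measured[name] > limit]

    if failures:
        print("Release blocked; thresholds exceeded:", ", ".join(failures))
    else:
        print("Quality gate passed.")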

AN AGILE TAKE ON THE ECONOMICS OF DIGITAL MEDIA
Kent Beck proposed a reinterpretation of the conventional wisdom on the increasing cost and complexity of software over time (Beck, 2000). The traditional logic of the increasing cost-of-change and steadily increasing complexity over the life of a software project is the motivation for conducting exhaustive up-front analysis. This also accounts for the conventional wisdom of resisting change at the later stages of a development life cycle.
However Beck suggested that the contrary view is the norm and, further, that accommodating and responding to change is the normal condition for software projects. He claimed that an 'adapt to change' model should instead guide the management of software development, i.e. implement only what is needed now, then check and correct before moving on to the requirement needed next. This process of deliver, correct, deliver, correct, continues for the entire life of the system, even after deployment or being put into production (see below). If the product or service is delivered digitally then distribution to customers can be made an almost trivial process. While the work of applying and using updates shifts to the customer, even the update and deployment processes can be gradually streamlined to facilitate customers who choose to update. Further, if the product is delivered as an on-line service then deployment reverts to the development organisation, and a customer's use can be perceived as continuous and unimpeded by regular releases, even when training may be needed to use new functionality.
Figure: The cost of change over time: Traditional vs. Agile view

Likewise for design complexity: traditional software development invests massive effort in up-front design and requirements analysis, and allows relatively little revision or change during development and no change after deployment (Beck, 1999). This front-loads design complexity, which then tapers off over time. The initial design starts out relatively complex as much of the architecture and design is done before coding commences. The architecture remains static while code complexity gradually increases and then ceases to change as the product is finalized (see below).
Figure: The increase in design complexity over time: Traditional vs. Agile view

Because developers invest only as much up-front design and requirements analysis as is necessary to deliver the minimum required functionality first, they ensure that design complexity increases gradually rather than abruptly. The most valuable features are delivered now because that’s when they are needed; other features will be identified and refined as the project proceeds. The architecture and design complexity will appear to grow organically as new requirements are implemented. An additional process termed ‘refactoring’ is also applied. Refactoring anticipates that earlier design elements may need to evolve as the whole project gradually expands. Furthermore, we encounter occasions when product redesign without additional feature development is needed in response to evolving non-functional requirements (stability, scalability, usability, security, etc.). Refactoring as a process also acts as a brake on continuously increasing design complexity; effective refactoring often produces desirable redesign with the goal of achieving 'design simplicity'.
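As a small, hypothetical illustration of refactoring in this sense, the sketch below preserves behaviour while simplifying a design that has accumulated branches as requirements arrived one by one.

    # Sketch: a refactoring that preserves behaviour while simplifying the design.
    # Before: discount logic accreted as branches, one per new requirement.
    def price_before(amount, customer_type):
        if customer_type == "student":
            return amount - amount * 0.10
        elif customer_type == "senior":
            return amount - amount * 0.15
        else:
            return amount

    # After: the variation is captured in data; a new customer type no longer
    # means another branch in the function.
    DISCOUNTS = {"student": 0.10, "senior": 0.15}

    def price_after(amount, customer_type):
        return amount * (1 - DISCOUNTS.get(customer_type, 0.0))

    assert round(price_before(100, "student"), 2) == round(price_after(100, "student"), 2) == 90.0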

Traditional projects front-load as much cost (effort in design and requirements gathering) as possible, anticipating that they’ll understand the problem early and select the correct solution. Waterfall exemplifies this approach. Agile approaches usually implement only what is needed now, check-then-correct, before moving on to develop the next requirement and so on; this continues for the entire life of the software, even after deployment or being put into production.


SUMMARY
  • Cost, +/- it can cost more or it can cost less
  • Time, +/- it can take t, or 2t, or t+n. So how long is a piece of string? How long will the software be used? Is this the first release of many?
  • Quality, an artisan strives for quality, the inherent value and appreciation of things being made, pride in solving a difficult problem, in producing an elegant solution. Quality should not be treated as a variable. Instead quality is an indicator of the success or failure of our ability to balance the dynamic interactions between cost, time and scope.
  • Scope, +/- you can have many or fewer features, the goal here is to go for the features you really need now, leave the other stuff till later. Don’t deliver now what you can put off to a later iteration.