
About Cliff Moyce

Freelance business writer, speaker and board advisor. Former CEO, COO and CTO in financial services and capital markets. Available to produce white papers, articles, website copy and other marketing collateral relating to capital markets, financial services and financial technology (artificial intelligence, blockchain, cloud, big data, regtech, cyber-security, business transformation etc). Please feel free to get in touch with me at

Service oriented architectures and web services as a solution to legacy IT problems

Cliff Moyce, October, 2015.

When computing became ubiquitous in administrative environments in the late 1980s and early 1990s it was welcomed as an opportunity to improve the efficiency and effectiveness of business processing.  Manual or semi-manual business processing at that time was noted for its inefficient hand-offs, checking, and duplicated effort, as well as storage problems.  And yet 30 years later we look at extant (‘legacy’) IT systems architectures as representing the biggest barrier to productivity in some types of organisation.  For example, large banks now spend nearly 50% of their operating budgets on IT – and yet it is IT configured in ways that would horrify any student of process design: multiple systems (sometimes meaning twenty or thirty, not just two or three) doing the same thing; forced ‘integration’ between systems requiring software, middleware and hardware that should never have been needed in the first place; inconsistencies between systems meaning reports have to take an aggregate of all outputs rather than relying on a golden source, etc.  Attempts to rationalise the architecture by building a single new system to replace multiple old systems often result in yet another system being added to the pile.  Support costs are high as people struggle to manage and resolve the complexity, risk and issues.  What to do about these problems is a long-running debate (eg de Souza, 2015; Preimesberger, 2014; Matei, 2012).  One approach that is often espoused is to design and implement a new, more modern architecture using a radical clean-slate / blueprint style approach (eg Marchand & Pepper, 2015).  While recognising the temptation to start again, this article asserts that big-bang approaches to legacy IT systems replacement can be naive, expensive and fraught with risk. Instead, pragmatic approaches that can deliver improvements using what exists currently are preferred and recommended.
As well as discussing technologies that can enable such approaches, this article considers the cultural and organisational implications of adopting these methods.

The debate on legacy systems in some organisations is intensifying as expectations for cost efficiency, flexibility, and usability increase.  Legacy architectures are typically described in articles and presentations as unplanned; complex; poorly understood; slow and expensive to operate, support and enhance; old fashioned in their interfaces and reporting capabilities; hiding redundancy; difficult to monitor, control and recover; susceptible to security problems; and, hard to integrate with newer models and technologies such as cloud computing and mobile devices:  “Even minor changes to processes can involve rework in multiple IT systems that were originally designed as application silos” (Serrano, Hernantes & Gallardo, 2014).  Getting old and new applications, systems and data sources to work seamlessly can be difficult, verging on impossible.  This lack of agility means that legacy systems in their existing configuration can be barriers to improved customer service, satisfaction and retention.  In regulated sectors they can also be a barrier to achieving statutory compliance.  Pressure to replace these systems can be intensified by new competitors who are able to deploy more modern technologies from day one.

Explanations for problems associated with legacy architectures include excessive complexity arising from a post-hoc need to integrate systems that were originally designed to be autonomous; poor knowledge of systems due to lack of documentation and loss of original development teams; individual applications growing ‘like Topsy’ as new functions and modules are bolted on to meet customer demand; use of technologies, models and paradigms that are now outdated; duplication arising from multiple systems doing the same thing, etc.  ‘Local initiatives’ are sometimes argued to be partly to blame for the situation (eg Marchand & Pepper, 2015) as business lines or functions commission their own system builds or buy package implementations, perhaps with little regard to integration and support issues.  Many of these explanations for the problem could be summarised as ‘customer requirements taking precedence over architectural integrity’, but many people (especially the customers) would prefer that to the converse.  Amusing analogies such as the possible negative consequences of living in an unplanned house that has been extended many times are sometimes used to encourage audiences to take a complete re-design approach to solving the problem (Marchand & Pepper, 2015).  By such an approach it is argued that customer service can be improved and complexity, duplication and risk reduced.  These are all highly laudable and valid aims, but how easy is it to design and implement a new IT architecture in a large mature organisation with an extensive IT systems estate?  For example, in a large bank with huge real-time transaction processing demands that has grown organically, and also grown by acquisition?  Rather than the unplanned house analogy, a better analogy might be a ship at sea involved in a battle.
Imagine if you were the captain of such a ship and someone came onto the bridge to suggest that everyone stop taking action to evade the enemy and instead draw up a new design for the ship that would make evasion easier once implemented.  You might be forced to be uncharacteristically impolite for a moment before getting back to the job at hand. 

At some point, many large organisations have attempted the enterprise-wide re-design approach to resolving their legacy systems problems.  Many such initiatives are abandoned when the scale of the challenge or the impossibility of delivering against a moving target becomes clear.  Time has a nasty habit of refusing to stand still while you draw up your new blueprint.  Re-designing an entire architecture is not a trivial undertaking, and building / buying and implementing replacement systems will take a long time.  Long before a new architecture could ever be implemented the organisation will have launched new products and services; changed existing business processes; experienced changes to regulations; witnessed the birth of a disruptive technology; encountered new competitors; exited a particular business sector and entered others.  All of these things conspire to make your redesign invalid before it is live.  If you are lucky, you realise the futility of the approach before too much money has been spent.  Furthermore, the sort of major projects required to achieve the transformation are the sorts of projects that have notoriously high failure rates: “In just a twelve month period 49% of organizations had suffered a recent project failure” (KPMG, 2005); “Only 40% of projects met schedule, budget and quality goals” (IBM, 2008); “17% of large IT projects go so badly as to threaten the very existence of the company” (McKinsey and Company, 2012).

So if wholesale blueprinting and re-engineering is impractical, what can be done to solve the problems of legacy architectures?  The first thing to say is that trying to fix all of the problems at the same time is a logistical impossibility in anything but the smallest companies, and carries a high risk.  Many organisations would not have the resources to accommodate the large spike in project effort.  Problems always need to be tackled in priority order as there is rarely a silver bullet for the whole job.  Luckily there are some practical and cost-effective approaches that can mitigate many of the problems with legacy systems while obviating the need to replace any of the systems.  Two of these approaches are service oriented architecture (SOA) and web services (Cabrera, Kurt & Box, 2004; Li, Huang, Yen & Chang, 2007; Mahmoud, 2005; Serrano et al, 2014). Used in combination, they offer an effective solution to the legacy systems problem.

SOA refers to an architectural pattern in which application components talk to each other via interfaces.  Rather than replacing multiple legacy systems, it provides a messaging layer between components that allows them to co-operate to a level you would expect if everything had been designed at the same time and was running on much newer technologies.  These components not only include applications and databases, but can also be the different layers of applications.  For example, multiple presentation layers talk to SOA and SOA talks to multiple business logic layers – and thus an individual presentation layer that previously could not talk easily (if at all) to the business logic layer of another application can now do so.
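The messaging-layer idea can be sketched in a few lines of Python.  This is a hypothetical illustration rather than code from any real SOA product: a toy ‘service bus’ registry stands in for the interface layer, and the service name, handler and account data are all invented.

```python
# Minimal sketch of the SOA idea: components interact through a named
# service contract (a registry of operations) rather than calling each
# other's internals directly. All names and data are illustrative.

class ServiceBus:
    """A toy messaging layer that decouples callers from implementations."""

    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        # A legacy component exposes one capability under a stable name.
        self._services[name] = handler

    def call(self, name, payload):
        # Callers know only the service name and message shape.
        if name not in self._services:
            raise LookupError(f"No service registered under '{name}'")
        return self._services[name](payload)


# A 'legacy' business-logic component wrapped as a service:
def check_balance(payload):
    accounts = {"ACC-1": 250.0}  # stand-in for a legacy data store
    return {"account": payload["account"],
            "balance": accounts[payload["account"]]}


bus = ServiceBus()
bus.register("accounts.check_balance", check_balance)

# Any presentation layer can now use the capability without knowing
# which legacy system implements it:
result = bus.call("accounts.check_balance", {"account": "ACC-1"})
print(result["balance"])  # 250.0
```

The point of the sketch is that the presentation layer depends only on the service name and message format; the legacy implementation behind it can later be consolidated or replaced without the caller changing.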

Web services aim to deliver everything over web protocols so that every service can talk to every other service using various types of web communications (WSDL, XML, SOAP etc).  Rather than relying on proprietary APIs to allow architectural components to communicate, SOA achieved through web services provides a truly open, interoperable environment for co-operation between components.
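To make the ‘open protocols’ point concrete, here is a minimal Python sketch of what a SOAP message actually is: just an XML envelope that any component can build and parse with standard tooling, with no proprietary API in the exchange.  The service namespace, operation and field names below are invented for illustration.

```python
# Hedged illustration: constructing a SOAP 1.1 request envelope with
# only the Python standard library. Everything in the message is plain
# XML, which is why web services are open rather than proprietary.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(operation, params, service_ns="urn:example:accounts"):
    """Wrap an operation call in a SOAP 1.1 Envelope/Body structure."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{service_ns}}}{operation}")
    for key, value in params.items():
        ET.SubElement(op, f"{{{service_ns}}}{key}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# The hypothetical 'GetBalance' operation of an invented account service:
xml_request = build_soap_request("GetBalance", {"account": "ACC-1"})
print(xml_request)
```

In a real deployment the envelope would be POSTed over HTTP to an endpoint described by the service’s WSDL; the sketch stops at message construction to keep it self-contained.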

The improvements that can be achieved in an existing legacy systems architecture using SOA through web services can be immense, and there is no need for major high-risk replacement projects and significant re-engineering.  Instead organisations can focus on improving cost efficiency by removing duplication and redundancy through a process of continuous improvement, knowing that their major operations and support issues have been addressed by SOA and web services.  Another benefit is that the operations of the organisation can start to be viewed as a collection of components that can be configured quickly to provide new services even though the components were not built with the new service in mind.  This is the principle of the ‘composable enterprise’ (Murray, 2013).
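The composable-enterprise principle can be illustrated with a deliberately simplified Python sketch: three pre-existing services (all hypothetical stand-ins with hard-coded data) are combined into a new ‘account overview’ offering without modifying any of them.

```python
# Illustrative only: three existing services, none of which was built
# with the new offering in mind. All data and names are invented.

def fetch_customer(customer_id):
    # Existing service: CRM lookup (stand-in implementation)
    return {"id": customer_id, "name": "A. Client", "segment": "retail"}

def fetch_balance(customer_id):
    # Existing service: core banking balance query (stand-in)
    return 120.50

def fetch_offers(segment):
    # Existing service: marketing offer engine (stand-in)
    return ["cashback-card"] if segment == "retail" else []

def account_overview(customer_id):
    """A NEW service composed entirely from the three existing ones."""
    customer = fetch_customer(customer_id)
    return {
        "name": customer["name"],
        "balance": fetch_balance(customer_id),
        "offers": fetch_offers(customer["segment"]),
    }

overview = account_overview("C-42")
print(overview["offers"])  # ['cashback-card']
```

The composition function is the only new code; the existing components are reused as-is, which is exactly the agility claimed for the composable enterprise.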

But addressing the issue of legacy systems in a way that makes good sense is not just an IT issue, it is also a people issue.  It requires people to resist their natural inclination to get rid of old things and build new things on the mistaken assumption that new is always better than old.  It requires people to resist the temptation to launch ‘big deal projects’, for all of the reasons that people launch big deal projects – from genuine belief that they are required (or the only way), to it being a way of self-promotion (and everything in-between).  It requires people to take a genuinely objective view of the business case for change, while operating in a subjective environment.  It requires people to prioritise customer service over the compulsion to tidy up internally.  And, it requires the default method of change to be continuous improvement rather than step-change projects – which can be counter-intuitive in cultures where many employees have the words ‘project’ or ‘programme’ in their job titles.  But this is all easier said than done when you are dealing with people in a real-life organisation where certain skills and behaviours have been valued highly for years.  It is not an overnight job to get people to realise that it is those skills and behaviours that are contributing to their problems.  Resistance to change should be expected.  In fact, as long as resistance is overt it is a good thing because at least people are engaging and opening themselves up to discussion and the possibility of learning (Moyce, 2015).  Getting to the point where legacy IT architecture issues can be handled in the best possible way will involve many of the common aspects of organisational change – education; developing new skills; adopting different mind-sets; using multiple rather than single methodologies; and, basing the choice of method on the reality of the situation rather than on custom and practice.
The popularity of agile methods means that continuous improvement using iterative rather than step-change approaches is in vogue again.  

To summarise, resolving the problems of legacy enterprise IT system architectures can provide significant gains in productivity, efficiency, agility, and customer satisfaction.  For that reason the endeavour should be a high priority.  However, there are many risks attached and this type of work needs to be approached in a way that is highly mindful of those risks.  After all, the systems are business critical – not only to the organisations that own and operate them, but also to the businesses of their clients.  Luckily we now have technical tools and approaches available to effect radical improvements without having to incur the expense, effort and risk of major replacement projects.  But using these tools requires a change of mindset and approach that may be counter-cultural in some organisations.  It can mean a move away from step-change and ‘long-march’ projects, and a move towards continuous improvement.  Education and engagement will be one of the keys to making it happen.

Cliff Moyce

13 October 2015


Cabrera, L.F., Kurt, C., and Box, D. (2004).  An introduction to the web services architecture and its specifications.  Last retrieved 30th June 2015 from 

IBM (2008).  Making change work.  Last retrieved 5th September, 2015 from

KPMG (2005).  Global IT project management survey.  Last retrieved 5th September, 2015 from

Li, S.H., Huang, S.M., Yen, D.C., and Chang, C.C. (2007).  Migrating legacy information systems to web services architecture.  Journal of Database Management, Oct-Dec 2007, 18, 4, 1-25.

Mahmoud, Q.H. (2005).  Service-Oriented Architecture (SOA) and Web Services: The Road to Enterprise Application Integration (EAI).

Marchand, D.A. and Pepper, J. (2015). Firms need a blueprint for building their IT systems.  Harvard Business Review (June 18, 2015).  Last retrieved 22 July 2015 from

Matei, C.M. (2012).  Modernization solution for legacy banking system: Using an open architecture.  Informatica Economica, 16, 2, 92-101.

McKinsey and Company in conjunction with the University of Oxford (2012).  Delivering large-scale IT projects on time, on budget, and on value.  Last retrieved 5th September 2015 from

Moyce, C.L. (2015).  Resistance is useful.  Management Services, 59, 2, 34-37.

Murray, J. (2013).  The composable enterprise.  Last retrieved 22nd July, 2015 from

Preimesberger, C. (2014).  Updating legacy IT systems while mitigating risks: 10 best practices.  Last retrieved 5th September, 2015 from 

de Souza, B. (2015). Enterprise architecture and the legacy conundrum.  CIO (13284045).  Last retrieved 16 July 2015 from

Serrano, N., Hernantes, J., and Gallardo, G. (2014). Service oriented architecture and legacy systems.  IEEE Software, 31, 5.

Cyber security: how can we turn the corner?

Cliff Moyce: 15 April 2016

Companies that manage data rely on customers being confident that their data (including sensitive / confidential / secret personal details) will be held safe and secure. If this backbone of trust is broken, those using their systems will simply stop doing so. This applies at both a corporate and at a consumer level. The particular sensitivities and high level of personalisation and visibility that characterise many modern enterprises make privacy vital for businesses’ continued existence. Despite the importance of customer confidence in data security, there have been several high profile cyber security breaches in the past two years in which enormous amounts of sensitive data were stolen. Hundreds of other breaches have occurred in the same period; they just haven’t made the headlines (in some cases, deliberately so). Companies that have suffered losses of customer data include JP Morgan Chase, TalkTalk, Anthem, Ashley Madison, Patreon, and LastPass. Some of the problems suffered have been so severe as to threaten the future of the company. In 2016 organisations will be keen to ensure they do not suffer the same problem, but how will they achieve that aim? One important step will be for organisations to forget the misconception that data losses are usually the result of technology weaknesses and failures. In fact, it is human failings that are far and away the most common cause of what the press often describes as ‘hacking’. Developing security policies to mitigate the people-risk in cyber security is no longer enough. In fact, it was never enough. Such policies risk being treated as tick-box exercises, or are created with good intent but are undermined by a culture of poor practice. Education and training in security policies is essential – but even that can fail if the necessary culture change does not happen. This is where the most important change needs to happen in 2016 to avoid repeating the mistakes of 2014 and 2015.
All employees need to be trained and examined on best-practice for cyber-security and data-protection.

One important area that is often overlooked is the risk of individuals falling victim to social engineering outside of the workplace. Their compromised status can then follow them into their organisations. It is vital that all staff understand how email attachments, phishing, and impersonations can be used to install malware on personal devices that are also used for work purposes. By this method, login credentials to their corporate network can be lost to ‘bad actors’. At JP Morgan Chase it was an employee’s personal desktop computer that was infected. When that individual logged in remotely to the corporate network via the company VPN in June 2014, the malware obtained access rights to the network. Human errors that had happened previously at JP Morgan (including forgetting to update security software on one server out of thousands) made it possible for the hackers to gain control of 90 servers and huge amounts of data, and steal large amounts of money from JP Morgan clients.

If companies invest in the right training and education for their people, it will result in a renewed faith in data security. This would be a breath of fresh air for a world that is becoming increasingly wary of modern enterprises’ ways of working. One ray of hope is that many organisations are now establishing better security standards and looking for new ways to create more private and secure methods of communication and engagement. Hopefully the outcome will be that people will start to feel more confident in using the apps and services that have so much to offer in terms of personal productivity. But will these improvements represent a triumph for everyone? Sadly, no. The unfortunate loser of tighter security and greater awareness will be the advertising industry, though possibly only temporarily. For advertisers, new security standards will mean that they have to invest in less intrusive forms of advertising. Hopefully that will eventually work for them as well as their current methods do.

To finish on a cliché: every problem is also an opportunity. With knowledge will come greater online security, more educated users of technology, and (even) more sophisticated advertising!

This article was published originally at on 15/4/2016

Cliff Moyce

Customer before process

I hope I can be forgiven a little anecdote about my private life in this article, which does quickly turn to the subject of business.  I recently had knee surgery to remove the broken bits caused by my years of competing at weightlifting and powerlifting (they give you a voucher for surgery with every trophy in those sports…).  At the pre-op assessment a week before the procedure they threatened to bump me to the bottom of the waiting list if I refused to confirm what had been entered onto their computer system – ie surgery was taking place on the left knee.  I couldn’t do that as it was the right knee that was broken, and in the end I had to contact the surgeon to pull rank.  The same thing happened on the day of the operation, even when I gently tried to suggest that as my right knee was clearly very swollen, I was using a walking stick on my right side, and I was clutching an MRI of my right knee, I might actually be right.  The pitying looks I got were classic.  Usefully, this got me thinking about what lessons for business could be learned from these incidents.  My conclusions are below.

The first lesson is the potential negative effects of creeping ‘processisation’ (apologies to Shakespeare!).  As a founder member of the Institute of Business Process Re-engineering I have a keen interest in how processes can make or break a business.  However, that does not mean that I value business processes over everything else.  What I saw at the hospital was experienced nurses who were in danger of becoming slaves to process and who gave IT systems more respect than they are due.  Last time I looked, IT systems had not fully conquered the garbage in / garbage out problem (which is what had happened in my case through a perfectly forgivable human error).  When I started out on my career, it was often a lack of formal documented processes that made things difficult, but these days we have processes for everything – including processes for processes (aka process management).  That is by and large a good thing, until following a process becomes a substitute for common sense.

The second lesson is the possible negative consequences of not listening to what the customer wants; a problem is often driven by a mistaken belief that you know what the customer needs better than they do.  This has been a dominant theme in the years that I have been working on company and project rescues.  I can give one real example safely because it was a few years ago, turned out well in the end, and the directors of the company have now retired.  The company had been very successful in the engineering sector with one flagship product and some ancillary products and services.  When the founder retired, a new CEO turned up with the attitude of ‘everything is crap’ and ordered that the flagship product be replaced with something more ‘modern’.  The sales team were told to go and spread the word about this forthcoming ‘silver bullet’ product that would do everything and more…  Unfortunately the company had made three bad (nearly fatal) mistakes:

  • nobody ever asked the clients if they wanted the main product to be replaced, or whether their own production systems could accommodate a major and discontinuous change
  • realising that they were on a sticky wicket, the sales people tried to justify the change to clients by saying that the current offering was ‘broken’, ‘not modern’ etc
  • building the replacement product was outsourced and became a long running failing project that even made it into the trade press

Having been told that the current product was no good, and with no sign of the replacement product arriving any time soon, clients started to drift away.  By the time my company was asked to get involved (we specialised in company rescues) the client was already in administration. Therefore, there were lots of legal and financial issues to be resolved by my colleagues while I led a campaign to keep the current clientele.  As part of the rescue I used my project management experience to recast and deliver the new product based on true customer requirements and using the in-house personnel rather than the expensive external consultants (they had gone anyway by that stage).

I know I can seem like a stuck record when banging on about the need to focus on customer requirements, and to really listen to what they are telling you, but when so many of the problems you have resolved during your career have been caused by a failure in this regard, it does become a bit of an obsession.

BTW the nurses, doctors and surgeons did a great job and retain my undying gratitude. I hope they will forgive me for using a minor blip to make a broader point.


Diversity in the professions

Avoiding discrimination in recruitment is both a moral responsibility for company boards and also makes good business sense.  But did you know that diversity in terms of social class and educational background is decreasing in the professions in the UK, with people educated at independent fee-paying schools now comprising 70% of finance directors, 50% of solicitors, and 45% of top civil servants (Panel on Fair Access to the Professions, 2009)?  This is despite independent schools teaching only 7% of our children, and 18% of children over the age of 16 (Hensher, 2012).  As a result, working class students are in the minority at almost all English universities, with over 80% of students at the (arguably) ‘top’ nineteen Russell Group universities in England coming from fee-paying schools and colleges (The Sutton Trust, 2008). Oxford and Cambridge present an even more extreme example: “Four private schools and one college get more of their students into Oxbridge than the combined efforts of 2,000 state schools and colleges” (Milburn, 2012).  This bias translates directly into entry to the professions, with 82% of barristers and 78% of judges in 2005 having studied at ‘Oxbridge’ (The Sutton Trust, 2005).  I researched this topic for my masters degree at Birkbeck, interviewing lawyers in City of London law firms who had come from working class, state-educated backgrounds (an increasingly rare breed).  My report is available on request.  You won’t be surprised to hear that these were the sort of people who had battled hard – and successfully – to overcome barriers that would not exist at all in a fair society.  You may be a little more surprised to hear (or perhaps not) that strong-willed, encouraging mothers played a big role in many of their lives – in many cases mothers who had been denied a good education because of lack of money in the family and therefore needed to leave school at the earliest opportunity to start work (as was the case with my mother).
We may never be able to achieve the utopia of a truly fair society, but as managers and directors with responsibility for hiring people we can be aware of the problem and at least do our best to avoid propagating it.  As a hiring manager and director I have been wrestling with that problem for much of my career (you can probably guess which side of the tracks I come from), and I can offer you a few tips that might help you if you want to make a difference:

  1. The most important step is to decide that you want to make a difference by hiring fairly.  Once you make that decision, the rest follows naturally.  You may find it harder to convince your colleagues – and some colleagues will never be convinced – but don’t let that put you off.  You will also find that hiring fairly is much harder work than you realised.
  2. Look at how you recruit people and decide whether there is inbuilt discrimination (perhaps unwitting).  That doesn’t mean looking at what is written into your hiring policies, but looking at how recruitment is done in practice (the unofficial process).  Are you or your managers and team leaders rejecting applications because of the university attended, ie not treating all universities as equal?  Are you rejecting on the basis of ‘A’ level grades even though there is much research showing that two children of equal capability will get different results depending on whether they went to a state school or an independent school?  Are you insisting on a university degree in any subject – suggesting that there is no real vocational educational requirement for the post?
  3. Research methods of selection and assessment that evaluate the capability and potential of the person regardless of background.  Perhaps hire an organisational  psychologist who specialises in this field to advise you.
  4. Look at your supply chain.  What practices are being employed by your external recruitment agents?  Look at the adverts they are posting online.  I once found one agent posting the following as standard: “If you haven’t attended a red-brick university then don’t bother applying”.  Just before I terminated our relationship I pointed out to him that there are only six red-brick universities in the UK (all in England and all still true to their original ideals), but they do not include some of the universities to which I suspect he was aspiring (Cambridge, Oxford, University of London, Durham, Bath, etc).
  5. Report on diversity by social class and educational background at your company.  Take pride in telling people that you aim to reflect the make-up of UK society as a whole, and not just a privileged part of it.
  6. Support the work of The Sutton Trust educational charity in going into state schools and opening the eyes of young people to opportunities that await them at your company.

Good luck to anyone who is doing this or wants to try and do it.  I am happy to talk or correspond on the subject.

Cliff Moyce

December 2013


Management education

I have always been bothered by the fact that many people who become team leaders or managers in the UK do so with little formal education or training in the subject.  We cannot drive a car on the road without passing a test that would be almost impossible to get through without an investment in formal training by a qualified instructor, so why do we think someone can manage people, companies, budgets, projects, clients, partnerships, industrial relations etc without a proportionate level of management education?  Is it any wonder that we see such high levels of stress among managers, often underpinned by feelings of not coping?

Despite the potential downsides of being a manager, it is important to remember that it is also an incredibly rewarding and fulfilling career, and I feel that the difference is often simply down to preparedness.  Have you been educated in being a leader and are you on a continuous programme of management development?  If you are then I think the chances of success as a manager and director are higher and the chances of suffering stress through feelings of not coping are lower.    I believe strongly that formal management education is an important factor in successful management careers, and I want to explain why.

Though we have more people participating in higher education than ever before through the university route, the decline of the company-funded part-time HND in the UK as the enabler of career progression has meant that training in the major practical areas of management has declined.  This is because many HND subjects included management training as standard.  Though I studied chemical engineering at university after school, it was a subsequent Diploma in Management Services that taught me about leadership and motivation, organisational design, financial management, personnel management, industrial relations, productivity, logistics etc.  I have always regarded that part-time two-year course as the best education that I ever had, and I doubt that I ever thanked my employer sufficiently for paying the fees and giving me the time off.  I certainly do not believe that I would have picked it all up along the way if I had not enrolled.  In fact, formal management education based on a body of research often teaches us that things that seem like they will work can have unexpected negative consequences.  A common example of this phenomenon occurs when psychometric assessments are used at work to classify people by personality ‘type’ (personality – let alone personality type – is a much disputed and debated construct in the world of psychology).  Notifying individuals of their ‘type’ often results in them spending the next few years trying to behave to type and using certain characteristics of their type to explain their failures and less helpful behaviours.  Another example I have seen on a number of occasions is when a small number of staff are selected for ‘high potential’ or executive development schemes, only for the organisation to experience a drop in overall morale and performance as those not selected feel unappreciated and perform worse.
A final example (a favourite of the occupational psychologist Frederick Herzberg) is the weak link between pay and motivation: people given an unexpected bonus for a piece of work that they particularly loved doing can suddenly become demotivated, because their creation has effectively been reduced to the status of a commodity.  All three of these common management mistakes could be avoided if managers were put through formal management training, as all three are covered (in my experience) in one way or another by most academic courses in the subject.  In my own case, I was so fascinated by theories of leadership, motivation and organisational change that I went on to complete an MSc in organisational psychology (so I now have hundreds of examples of dos and don’ts with which to bore the unwary!).  So fascinated, that I may even write some more detailed posts on the examples I have given, with deeper explanations as to why these approaches do not work, and further examples of when intuition without education can let you down.

To summarise, experience is a wonderful thing, but I do not believe that you will ever become everything you could be as a manager if you do not combine it with a parallel path of formal management education and development.  And why wouldn’t you, when it can be so much fun?!

Cliff Moyce, December 2013


Why do projects fail?

Working in business transformation means that I have had the privilege to work on and lead many successful projects and programmes.  I confess that I find delivering a successful project and realising the hoped-for benefits to be very satisfying, both personally and professionally.

However, I would never claim that all of my projects went smoothly from start to finish.  Far from it.  Resolving issues is all part of the cut and thrust of discrete change, and sometimes it can feel like there is more going wrong than going right.  Despite this, the teams I have worked with have always managed to get there in the end.  But as well as delivering new projects from start to finish, I have been called in by clients on many occasions to quality assure and/or rescue failing projects.  Saving a failing project can be doubly satisfying of course, but not all projects are worth saving, and then you have the pain of switching them off, losing the sunk cost (which can be tens of millions of pounds), and sometimes even losing members of staff.  Even when projects are saved, the work can be extremely hard and stressful.  There is often a lot of negative emotion around a failing project, with disappointed customers and sponsors throwing blame at exhausted and disheartened teams.  I once spent three years turning around and finally delivering a major project that had gone badly awry, and often questioned my own sanity in doing so.  With over 20 years of doing this sort of work, I thought I would share with you some of the common themes that I see in failing projects.  If all I do by writing this piece is stop one project going awry, then I will be happy.

My major observation is that projects rarely fail because someone did bad work halfway through the project.  Almost every failing project that I have ever come across was hobbled early on, at the definition stage.  A lack of clarity and objectivity about the true need for the project is a common theme.  For example, individuals or teams deciding for themselves what customers want and then going ahead and building it, without really listening to the customers in the first place.  When I am called in to QA a project I work hard to identify who the customer is, and then I go to see them (all of them if needs be – or certainly a representative sample).  I have long since stopped being surprised by the response from customers: “All I really wanted was for them to fix the current system / process / service / product; but instead they have gone off and started a one / three / five year project to build a new one.  In the meantime I am supposed to keep using the broken one.”  Another variation on this theme is a lack of clarity about the problem that is supposed to be solved by the project.  Despite working on projects for much of my career, I know that projects are a difficult and risky way to deliver change compared to continuous improvement, so they should not be entered into lightly.  If you do not have a clear handle on the problem to be solved, then how can you decide the best way to solve it (or even whether the problem really exists, is important, and needs to be fixed)?  A further variation on this theme is a lack of clarity on the desired outcome.  Some project management methodologies are so process-heavy that it is easy to forget how conceptually simple the desired outcome actually is, and what you are trying to achieve.  We should focus on the baton, not the runner, as Craig Larman says.  You will notice that I did not mention a lack of clarity about the desired solution.  That is because humans are very good at generating ideas for solutions – and that in itself can be a problem, as many projects start with a solution and then go looking for a problem to solve.

Many of the problems listed above around clarity of need, clarity of problem, and clarity of outcome arise not from incompetence (though lack of experience can sometimes be an issue), but rather from the dreaded ‘Three P’s’ – people, power and politics.  For example, I often find that sponsors and other members of the governance board had doubts about the need / direction / leadership of the project from day one, but either said nothing or, more commonly, said something once, were made to feel that they had said the wrong thing, and kept quiet thereafter.  Unfortunately, projects are sometimes created to further someone’s personal agenda, and these are often the most ill-conceived of initiatives.  In theory, formal governance and project initiation procedures should stop these sorts of issues arising, but unfortunately these processes can often be exercises in box-ticking where no real critical examination of the need takes place.  Where project definition processes fail to stop bad projects, the next step is to be the person who stands up and points out that the emperor has no clothes.  Of course this risks you being shot as the messenger, but the consequences for your organisation of not challenging the need / goal / approach for projects can be serious.

I hope my experiences give you food for thought.  Of course, there are many other reasons why projects fail – and even the definition of ‘fail’ needs to be unpicked – but the themes above have been fairly common in my work across various sectors and project types.


Agile in name only

I see a lot of implementations of agile product development methods that seem to me to be agile in name only.  They typically have not achieved a higher rate of success than whatever method was practised previously.  As a big fan of agile methods for many years, this bothers me.  The reputation of a great approach could start to suffer from a proliferation of poor implementations, and that would be a shame.

The current problem with agile is that everyone from individual contractors to outsource software development companies has to claim to do it in order to get hired.  However, there is often little investigation initially into their true level of understanding, and little follow-up to see whether what they are doing subsequently is truly agile.  Often the people doing the hiring and buying are unsure themselves of what they should be looking for.

Using Scrum as an example (other agile methods are available!), the problem seems to be that people feel that if they practise one or two of the Scrum techniques (typically chopping their workload into two-week packages – ‘sprints’ – and doing a daily stand-up meeting – the ‘daily scrum’) then they are ‘agile’.  Nothing could be further from the truth.  The most important principles of agile are:

  1. The focus should be entirely on the outcome, not on the process.  In Craig Larman’s terminology “focus on the baton, not the runner”.  In Scrum terms that means every sprint should deliver a ‘potentially shippable product’.
  2. Teams should be genuinely multi-functional.
  3. Everyone should commit to the sprint.  Ie everyone should do whatever it takes to fully complete the sprint and achieve the outcomes, no matter what.  If it nearly kills you this time, then you will be better at estimating next time.

When I ask people why their sprints are not producing potentially shippable products, but instead producing software to be used at a later date, they tell me that the two-week sprint is not long enough, and ‘the tester’ can’t fit all the testing into the end of the sprint.  That argument is the antithesis of agile thinking.  Sprints should be as long as it takes to produce a shippable product – anything from one week to 12 weeks is a reasonable guide.  If a sprint would need to go over 12 weeks to deliver a product, then sacrifice the principle (for once) and split the sprint in two, to reduce the chances of storing up risk in a long cycle.  The other issue with this oft-heard argument is the belief that testing is done by a specific function or person(s).  Rigid functional divides have no place in agile.  If you cannot achieve the ideal of everyone being able to do everything (though the problem is usually lack of will rather than lack of ability), then it should at least be the case that developers choose to help out with testing, rather than optimising their own personal productivity by doing work unrelated to the current sprint.  The best implementation of Scrum I have seen is in a highly successful investment bank where team members do everything (analysis, coding and testing) and genuinely work together to achieve the outcome.
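To make the ‘potentially shippable product’ rule concrete, here is a minimal sketch in Python (all names are hypothetical, purely for illustration) of the idea that a sprint only counts as done when every backlog item has been through every function – analysis, coding and testing – inside the sprint, not just the function that happens to match someone’s job title:

```python
# Toy model of the "potentially shippable" rule: a sprint is done only
# when EVERY backlog item has completed ALL steps within the sprint.
from dataclasses import dataclass, field

REQUIRED_STEPS = {"analysis", "coding", "testing"}

@dataclass
class BacklogItem:
    name: str
    completed_steps: set = field(default_factory=set)

def potentially_shippable(items):
    """True only if every item finished every required step in the sprint."""
    return all(REQUIRED_STEPS <= item.completed_steps for item in items)

sprint = [
    BacklogItem("login page", {"analysis", "coding", "testing"}),
    BacklogItem("audit log", {"analysis", "coding"}),  # testing deferred
]

print(potentially_shippable(sprint))  # False: one deferred test breaks the rule
```

Deferring the testing of even one item to a later date fails the check – which is exactly the point: the team either finishes the work together, or the sprint has not delivered a shippable product.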

I have implemented and operated agile methods (Scrum, XP and DSDM) for many years.  Before that I was (and remain) a big fan of lean methods arising from the ‘Toyota Way’ of continuous improvement.  Agile and Lean have a lot in common.  I have concluded from my various travails in getting agile to work that the surest way to achieve success is through targeted assessment and selection of people and suppliers.  With individuals, I find that people who score highly on the Conscientiousness dimension of ‘Big Five’ personality tests always work well in agile environments.  They are the people least likely to sit in their own self-created functional bunker.  Also, at interview, if people focus on telling you about how they did something, rather than about the outcomes and business benefits of their involvement in a team, then they are probably not what you need.  With suppliers, look for give-aways such as talk of functional roles (eg ‘dedicated testers’ and ‘dedicated scrum masters’) and overhead roles designed to hold artificial functional divides together (such as project and programme managers).  Such talk and organisation is not agile.

In conclusion, agile is a behaviour not a methodology.  That is why the ‘a’ is not capitalised.  Agile is a state of mind.  People who are agile look at everything from the customer’s perspective.  Beware of ‘agile in name only’.

Cliff Moyce

ps For Scrum, I highly recommend the books and services of Craig Larman and Ken Schwaber

Reducing the need for projects through continuous improvement

If we know anything about projects, it is that they carry risks and cost money.  Sometimes significant risks and a great deal of money.  If the most serious risks become manifest (eg overrunning significantly on time and budget), then the consequences can be severe.

Despite the high rate of project failures reported in business and academic journals, our appetite for projects continues to grow unabated.  Though project management has been good to me (I owned and ran a project management consultancy for 16 years), I know that the need for many of the projects that I have led could have been avoided if firms had taken a different approach to achieving and maintaining operational excellence before calling me in.  Specifically, by creating a culture of continuous improvement, organisations can avoid ‘falling behind the curve’ relative to their respective environments (markets, competitors, customers, regulators, etc).  Allowing operating models and IT systems to stand still while everything changes around them will almost inevitably result in a need for ‘all or nothing’ projects and programmes at some point, with all the attendant risks and stresses that such projects bring.

When I started out in my career, the emphasis was very much on continuous improvement (CI).  Undertaking my studies with the Institute of Management Services, we were taught only CI as a way of improving operational efficiency and effectiveness; project management was never mentioned.  Though CI techniques have been around for a long time, they increasingly started to be named and popularised from the 1960s onwards under titles such as Kaizen (the Japanese term for continuous improvement, closely associated with Deming’s plan-do-check-act cycle), Total Quality Management, Lean, the Toyota Way, and Six Sigma.  For anyone unfamiliar with these approaches, spending time learning about them can be highly worthwhile.  As someone who has used almost all approaches on the change continuum (from the most continuous to the most discontinuous), I know that CI provides the most sustainable, maintainable and correctable operating environment.  The approach applies equally well to organisational design, business process redesign, and updating IT systems, and works best when applied to all three at all times (with measurement focused on the customer perspective).  An operating environment subject to the constant attention of CI will not only keep up with the curve, but will push it forwards (as Toyota has demonstrated) and ensure maximum customer satisfaction.

Of course, we will always need projects.  Some change is discrete by nature.  Civil engineering and ship building are good examples – though they also use CI methods to improve their processes.  CI can itself drive a need for projects on occasion if the implementation of change requires a significant level of control.  Generally, the iterative nature of change that is driven by CI means that implementation can be handled by line staff more often than not, and specialist project managers are not required.  That approach should certainly be the default.  Six Sigma is particularly good at spreading the necessary skills across the organisational structure.

It might feel that developing IT systems is better suited to a project management approach than to CI.  This is generally true, but many of the IT projects that I have led could have been avoided (or the scale, cost, risk and urgency of the development reduced) if the implementation of the previous / existing system had been accompanied by a plan that did not end on go-live day, but instead carried on to ensure that all benefits were realised, and that support and maintenance were driven by business horizon-scanning rather than just technical needs.  A useful middle ground is agile development methods such as Scrum and XP, where new systems are built and maintained by self-managing cross-functional teams without the need for project management specialists.

To summarise, I believe that continuous improvement is the best way to ensure that a business will continue to succeed even when the environment is changing rapidly.




Making systems development outsourcing work

As an independent management consultant I often help clients optimise their IT outsource arrangements.  I do this based on my own experiences over many years as a client of systems development providers (onshore, nearshore and offshore), and as someone who also advises outsource firms on how best to meet client expectations for software delivery.  I confess that I am a big fan of the benefits that outsource suppliers can bring (eg access to stable, easily scalable teams with top-quality technical resources), while acknowledging that outsourcing (partial or wholesale) brings its own problems.  I have also enjoyed building and leading successful in-house teams in the UK and US, and recognise the benefits of the in-house supplier (easier communications and stronger personal relationships) compared to the outsource option.  My most productive experiences were when strong in-house teams worked well with equally strong outsource teams in the same or different countries.  In this short article I want to share some of the common issues that I encounter in my work on nearshore and offshore outsourcing, and some of the approaches that I have used to resolve these problems.  [NB many of these issues can also manifest when engaging with onshore suppliers.]

The most common complaint from both suppliers and clients is that the outsourcing arrangement never fully meets original expectations.  Even when the initial project is completed successfully, clients often feel that making the arrangement work required far more effort on their part than they were expecting, and that relationships at the day-to-day working level were not as smooth as hoped (even when the technical quality of the work done was good).  Another common complaint from clients is that little value was added over and above the simple completion of tasks.  Many clients (including me when I was a client) are hoping for suggestions for design improvements, technical innovation, and enhancements to the overall business systems architecture as a by-product.  For their part, suppliers become frustrated when engagements end once the original project is completed, and/or the account never grows above the initial team size.  Here are some of the suggestions I make when I hear these issues:

  1. Be realistic.  Outsourcing rarely works perfectly straight out of the box with a new supplier.  I’ve used everyone from the very biggest to the very smallest suppliers, but in the end it is just people learning to work with other people.  All of the training and methodological awareness in the world does not remove the need to learn each other’s styles, preferences and needs.
  2. Domain experience is always context dependent.  It never transfers 100%.  New contracts need to allow time for learning the specifics of the business; the details of an unfamiliar business systems architecture; and, the scope of the particular project.
  3. Take responsibility for making it work whichever side of the equation you are on.  It is not someone else’s job.  Some of the best supplier relationships that I had, had the worst possible starts and were teetering on collapse before issues were resolved.  The grass is not greener elsewhere, so get on with flushing out all of the issues in a safe environment with no blame or recriminations.  Whatever issues you have in this supplier relationship will manifest in all supplier relationships if you do not tackle them head on and learn something in the process.
  4. Suppliers cannot complain that the account did not grow or continue beyond the initial project if they made no attempt to share their ideas for broadening their scope while they were delivering the first project.  The big consultancies are good at this – many suppliers could learn a lot from them.  I once found out by accident that my supplier felt that other aspects of our technical architecture were a mess, but they felt that it would be rude and inappropriate to tell me or my colleagues (even though that is exactly what we wanted).  My view is that suppliers should treat every engagement as the start of a long relationship, and act accordingly (even if it means taking risks by delivering bad news).
  5. Become expert at operating agile approaches to software development in large-scale geographically distributed environments.  Hire people with that expertise.  Don’t claim you can do it if you can’t, as that just leads to the worst aspects of waterfall methods being married with the worst aspects of agile methods (I see a lot of that).
  6. Communicate, communicate, communicate.  If you operate daily stand-ups via Skype etc, then make sure that everyone attends.  Do not hide team members behind a team-lead or the person with the best English.  Transparency is everything in outsourcing.
  7. Be sensitive to cultural differences, but avoid stereotyping or operating on false assumptions.  Some cultures are generally more comfortable with agile approaches, while others are generally more comfortable with well-planned structured sequential methods; but I have found significant exceptions to the rule.  Distance from your own country does not necessarily increase the culture gap.  Companies have cultures too – not just countries.
  8. Beware the dog and pony show.  The salesman in your office does not necessarily reflect the culture back at the ranch.  Always visit the offices of your potential outsource provider, and spend time with engineers – not just team leaders and account managers.  Regardless of technical expertise, not all companies will work well with your company.  Find one that feels like a good fit.
  9. Do not try to bankrupt your supplier.  Outsource suppliers work on slim margins as they believe (mistakenly in my opinion) that they are competing on price.  Of course outsourcing provides excellent value for money – but it provides a lot more than that as well.  Paying a fair rate reduces your risk as a client.  Accept that nearshore suppliers cannot match offshore rates, but they more than make up for the price difference in other areas.
  10. Do not underestimate the impact of time differences.  Structure your projects in a way that works with the time difference, rather than fighting the time difference.

I hope this reflection on personal experiences is helpful to someone somewhere!

Cliff Moyce

September 2013