
Don’t smash the looms: five reasons why artificial intelligence is nothing to fear

[first published on November 7th 2017]

Predictions have been rife this year about the threat to jobs from Artificial Intelligence. We are warned that AI will learn how to do our jobs, thus rendering us superfluous. Should we be worried, or do fears about AI demonstrate a failure to learn from history?

The history of technological impacts shows that fears of job losses from the automation of weaving in the 18th and 19th centuries were unfounded. The Luddites famously smashed automated looms in protest that their craft-based skills were being made redundant and that unemployment and hardship would result. The Luddites believed that technological advancement generates inevitable structural unemployment and is consequently injurious to the macro-economy. The counter-argument is that if a technological innovation reduces the labour inputs needed in a given sector, then the industry-wide cost of production falls. This, in turn, lowers the competitive price and increases the equilibrium supply point which, theoretically, will require an increase in aggregate labour inputs (Jerome, 1934). That is, the product becomes cheaper and more widely available, and new demand is created. This is in fact what happened with woven goods once production was automated. The sale of rugs, carpets and a myriad of other woven products expanded enormously, and a cottage industry that employed a few thousand craftspeople weaving by hand was joined by a huge industry employing hundreds of thousands of people in producing, storing, transporting, and selling similar goods produced by machines. Though the immediate fears of the Luddites were understandable, their predictions were so incorrect that economists coined the phrase ‘Luddite Fallacy’ to belittle any further claims that new technology would result in net job losses. And note that the craft-based industry of the Luddites was not replaced by the automated industry: it continues to this day, where its output is treated as the luxury product that it is.

By now, we 21st-century people should be confident that new technology will not create mass unemployment but will instead create jobs and boost economies. And yet our fears remain. Fears that are stoked by the media, whose interest lies not in the welfare of workers but in selling newspapers, subscriptions and advertising in print, online, and on TV.

Five reasons we shouldn’t fear AI:

1. AI does not do the work that people do; it does the work that people cannot do

One mistake that people make when debating AI is to assume that it does work currently being done by humans. By and large, it does not. Instead, it does work that people cannot do at all, or cannot do easily, or cannot do sufficiently well in a reasonable timescale. Or it does work that is being done by machines already, but it does it much better than existing machines.

Previously, people used digital calculators, spreadsheets and computer modelling techniques to do many of the things that they can now (or will soon) use AI to do faster and better. Those same people can now use AI techniques such as pattern recognition to meta-analyse Big Data from a vast number of sources. An example of a machine-driven process being replaced by a better (AI) machine is robo-advisory investment management services for retail customers. Current algorithm-driven methods delivered via online services have a reputation for being clunky and simplistic. AI transforms this service with a level of sophistication that far exceeds what simple algorithms can do. The result: no human replaced, but many happy humans.

AI is an additive technology that opens up a whole new world of possibility to government, science, medicine, technology, logistics, education, and commerce. Through AI techniques of natural language processing, machine learning, deep learning, and cognitive computing, people and organisations can better automate processes, gain non-intuitive insights into data, and manufacture ‘better things better’. Non-intuitive insights from data can generate and validate new economic, business and investment strategies. In capital and commodity markets, the more efficient use of capital afforded by using AI tools can provide huge stimuli to economies through increased capital for investment.

2. AI does not destroy jobs; it creates a huge number of jobs

Rather than causing unemployment, factories created millions of jobs in the 18th and 19th centuries. AI unleashes human potential to do more, bigger, faster and better. It gives us the ability to do the things we always wanted to do, plus a lot more things we haven’t yet considered. That is how jobs are created. Already, AI has created many more jobs than it has ever replaced. Constellation Research predicts that the market for AI will be worth $100 billion by 2020. Many of the jobs being created by AI are jobs that could never have existed before.

3. AI creates jobs not just in its own development, but in every industry that uses it

As well as ‘pure AI’ roles there are many more jobs available in industries that are using AI to do new things, or do old things better, in the process creating increased demand and increased job numbers. One example is cyber-security, which uses a wide range of AI approaches and techniques, including machine learning, pattern recognition and fuzzy logic, to keep our data, identities and money safe. And yet there is such a skills gap that firms struggle to fill open positions. ISACA (a non-profit information security advocacy group) predicts a global shortage of two million cyber-security professionals by 2019. In financial services and capital markets, AI is the science behind anti-money laundering processes and technologies, as well as many other forms of risk management, including ‘RegTech’: AI-based technology used to assure regulatory compliance by making sense of multiple, often conflicting or incomplete, data sources.

4. AI will not kill us. AI will save us

Rather than worrying about something that will never happen (eg autonomous robots wiping us from the face of the earth), we should focus on how many lives are being saved right now by the use of AI in medicine and surgery. Or we should think about how many hungry mouths are being fed more cheaply thanks to improved agriculture coming from AI techniques and technologies. Or how AI is protecting our online identities and the data in our banks.

5. We will never be ready

Were we ready for the internet — part of the third industrial revolution — and everything (good and bad) that it brought us? Of course not, because we couldn’t predict the new business models that would be facilitated by such a technology, having never seen its like.

Final thoughts: The irony of change management in many organisations is that it ensures that real change never happens, because real change cannot be ‘managed’. Real change is almost always a reaction to significant change in the environment — including opportunities and threats created by new business models enabled by new technologies. Who among us can really predict everything that a large number of new technologies arriving at once could generate? Technologies such as nanotechnology; atomically precise engineering; conscious technology; a hyper-connected (and thus arguably conscious) internet of humanity; mixed reality living; synthetic biology; human augmentation; brain uploading; internet of everything; and, AI, to name but a few. None of us can come close to fully imagining all the new business, social and environmental models and opportunities that will be created by these technologies, but we can be sure of one thing: they will create huge numbers of jobs and businesses world-wide. They always do. We should be grateful for that. Please don’t smash the looms — but more importantly, please don’t be scared of them.

Cliff Moyce


Avoiding pitfalls when implementing machine learning and blockchain in insurance

[Feb 2017]


Tools for implementing Machine Learning and distributed ledger technologies (‘Blockchain’) can now be used relatively easily and cheaply in insurance processes. Used well, they present a great opportunity to improve productivity and security. However, organisations need to ensure that implementations and subsequent operations are as painless and risk-free as possible. This article aims to provide tips on avoiding common pitfalls.

Machine Learning:

Machine Learning (ML) describes software that adapts, changes, or ‘learns’ when exposed to new information. That is, explicit programming by a human is not required because the software is self-adapting. A good example of ML in action in our everyday lives is spam-detection software that improves its own performance through experience (even a single email can result in learning that can be generalised across all users of the software worldwide). In the insurance industry, one fundamental process that benefits greatly from ML is underwriting (the process by which the institution decides whether to take on the risk offered by a customer or broker, and at what price). ML algorithms can be trained on millions of pieces of customer data, actuarial information, and policy outcomes to suggest or make the best business decisions within parameters set by the firm. As a further benefit, the analyses done by ML software can unearth underlying (sometimes non-intuitive) trends as new information continues to be captured during live processing. Such trend analyses can result in better underwriting decisions and more competitive pricing (a benefit to customers). Other uses for ML in insurance include:

  • combining ML with telematics in vehicles to improve road safety. Eg by using information on the road ahead to guide drivers, analyse driving patterns, and make safety suggestions.
  • spotting account behaviours correlated with default and customer churn.
  • increasing the speed and quality of processing (through digitalisation and information capture) from paper forms.
  • improving fraud detection and prevention at the underwriting and claims-handling stages. Eg analysing handwriting (using ML) on digitalised paper documents can result in similar content being spotted across different jurisdictions, even if held in different formats (eg death certificates in different states in the U.S.).
  • improving the speed and quality of insurance policy reviews by ingesting a policy, breaking it down into clauses and logical blocks, analysing and comparing etc. (when done manually this process is labour-intensive, error-prone and slow.)
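The ‘improves through experience’ idea at the heart of ML can be sketched in a few lines of Python. This toy perceptron (not any specific insurance or email product) adjusts its weights whenever a labelled example proves it wrong, rather than being explicitly programmed with rules; the email features and training data are purely illustrative.

```python
# Toy sketch of learning from labelled examples: a perceptron that updates its
# weights only when it misclassifies, i.e. it "learns" rather than following
# hand-written rules. Features and data below are illustrative, not real.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) pairs with label 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # non-zero only when we got it wrong
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(model, features):
    weights, bias = model
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Hypothetical features: [contains_suspicious_link, all_caps_subject, known_sender]
training_data = [
    ([1, 1, 0], 1),  # spam
    ([1, 0, 0], 1),  # spam
    ([0, 1, 0], 1),  # spam
    ([0, 0, 1], 0),  # legitimate
    ([1, 0, 1], 0),  # legitimate
    ([0, 0, 0], 0),  # legitimate
]

model = train_perceptron(training_data)
print(predict(model, [1, 1, 1]))  # 1 -> flagged as spam
```

A real underwriting or spam model would use far richer features and a modern learning algorithm, but the loop is the same: exposure to new labelled information changes the model's behaviour with no reprogramming.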

Tips for adopting ML in insurance:

  • start small – but do start. Do proof-of-concepts (POCs). Do not attempt to plan a large-scale transformation without ever having done a small implementation.
  • early success will depend on having enough valid data to train the software. Make sure you can get those data.
  • partnerships are important. Most insurers do not have enough expertise internally (as at February 2017) and will benefit from third-party assistance.
  • diversify. Avoid betting the farm on a single ML approach or technology. Try different approaches in different POCs.
  • Use Cloud-based, SaaS, on-demand solutions from partners, plus open-source tools.  This will allow you to experiment at lowest possible cost (rent don’t buy).


Blockchain:

Blockchain is a secure record of transactions collected into blocks, grouped in chronological order, and distributed over different servers to provide reliable provenance. The technology uses digital signatures and a consensus mechanism that ensures participants can agree on which transactions are valid. Recent experience with clients suggests that Blockchain is going to be an important part of the insurance technology (InsurTech) revolution. Blockchain benefits will include improved underwriting accuracy, reduced administrative costs, and improved success in preventing claims fraud. In the research paper ‘ChainReaction: How Blockchain Technology Might Transform Wholesale Insurance’, Michael Mainelli identified the three most viable use cases as: (1) placement and contract lifecycle; (2) KYC/AML (know your client / anti-money laundering); (3) claims management. A further benefit could come from (4) improved fraud detection.

  1. Placement and Contract Lifecycle. The placement process is often heavily paper based. Each participant must ensure that there are no mistakes by checking the documents. This often results in rework and wasted effort/money when mistakes and discrepancies (between the information held by different participants) are found. If contracts are stored on the Blockchain as an immutable ledger they are certain to be consistent for all users, thus removing the need for participants to repeat the checking process.
  2. KYC/AML. Distributed ledger technology can reduce the cost and time of these expensive, laborious and (very) slow processes, for example by eliminating the need for third-parties to produce reports. The Blockchain would create a digital identity system that would allow broker and insurer to manage their documents (credit reports, patient records etc.) without fear of losing control of personal data and other sensitive information.
  3. Claims Management. It is highly feasible to design a Blockchain ledger of all documents created in the claim process and make them available for interested underwriters and other stakeholders. This would make the process transparent and reduce cost, delay, and reputational risk.
  4. Fraud detection. The total cost of insurance fraud (non-health insurance) is estimated to be more than $40 billion per year. Increased effectiveness in detecting fraud (eg falsified injury or damage reports) can be achieved by automatically validating and confirming the ownership, authenticity and location history of documents.
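As a rough illustration of use case (4), a document's digest can be recorded when a claim is filed and checked later to detect tampering. In this Python sketch a plain dictionary stands in for the shared ledger; the claim identifiers and document content are hypothetical.

```python
import hashlib

# Illustrative sketch of document-authenticity checking: record a claim
# document's hash when it is filed, then detect any later alteration by
# recomputing and comparing. A dict stands in for a blockchain ledger here.

def digest(document: str) -> str:
    return hashlib.sha256(document.encode("utf-8")).hexdigest()

ledger = {}  # claim_id -> digest recorded when the document was first filed

def record_document(claim_id: str, document: str) -> None:
    ledger[claim_id] = digest(document)

def is_authentic(claim_id: str, document: str) -> bool:
    return ledger.get(claim_id) == digest(document)

record_document("CLM-001", "Vehicle damage: rear bumper, estimate $850")
print(is_authentic("CLM-001", "Vehicle damage: rear bumper, estimate $850"))   # True
print(is_authentic("CLM-001", "Vehicle damage: rear bumper, estimate $8500"))  # False
```

The same digest could be compared across jurisdictions and formats, which is how duplicate or altered documents (eg the same death certificate filed in two states) can be spotted automatically.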

Tips for implementing Blockchain in insurance:

  • start small. Find a small problem and use Blockchain to improve the process running behind it (make it faster / less costly / less prone to error)
  • use private Blockchains such as Hyperledger for improved peace of mind (public Blockchains are still young and subject to security exploits and attacks).
  • a Blockchain is a distributed network of trust, so consider looking for partners that share the same goals. Doing so could unlock new gateways to internal transformations and shed light on new business opportunities.
  • Blockchain experts are rare and hard to employ, but it is crucial that you have a knowledgeable expert in your team. Hiring an experienced expert – even as a temporary or part-time consultant – should always be worthwhile.


Whether adopting ML or Blockchain or both, the following approach should work well: brainstorm with representatives of lines-of-business and business functions to identify processes that are problematic and could benefit from either technology; create a list of POCs to execute over the next 9 to 18 months; use partners with real expertise and verifiable experience; be prepared to fail early on some POCs while finding gems in others; keep experimenting (coming up with ideas and using different technologies); and gradually build robust business cases for change from those proofs of concept.

Good luck with your endeavours!

Cliff Moyce, February 2017

[this article was first published in Insurance Innovation Reporter on February 7th 2017]

Dodd-Frank – an unintended assault on business

[Feb, 2017]

The Dodd–Frank Wall Street Reform and Consumer Protection Act, 2010 (‘Dodd–Frank’) is a United States Federal Law that was intended to reduce levels of systemic risk in the banking sector, such that we would never again see highly leveraged balance sheet positions causing the failure of banks, and those failures causing contagion in other institutions and whole economies (as happened in 2008). 

Unfortunately, Dodd-Frank became an unintended assault on business because the capital adequacy provisions in the Act caused banks to stop lending (as loans weakened the strength of balance sheets). Loans were called in or withheld; overdrafts, materials financing and factoring agreements were not renewed; and, as a result, rates of business failure exploded. In some regions, banks refusing to fund the growth (i.e. not the decline) of small and medium-sized enterprises became the biggest single cause of business failure. The Act, and much of the other related regulation since the financial crash, also caused banks to divert most of their discretionary project budgets towards regulatory compliance initiatives – including meeting the needs of ever more demanding regulatory reporting.

Regardless of your personal and professional view on the value of increased regulatory reporting (Is anyone doing anything with the data?  What difference has it made? Has systemic risk actually reduced?), diverting funding away from projects that could improve products and services to customers should be a concern for everyone. It is in no-one’s interests for our banks to become moribund. 

Another highly ironic unintended consequence of the explosion in regulation in recent years (Dodd-Frank, the Basel accords, EMIR, MiFID, etc.) has been that it punishes small banks and financial institutions more than the big guys. That is, the institutions that represent the biggest source of systemic risk (big banks) are impacted less negatively than those that present almost no systemic risk. This is because the smaller institutions lack the resources (people, money, skills) to do the wholesale legacy system upgrades, system rationalisations, and new systems developments needed to meet regulatory reporting and risk management requirements.

Review and reform of Dodd-Frank is much needed; otherwise our western banking system, business environments and economies will be stuck in a nosedive that could become a death spiral.

We should welcome a review of Dodd Frank and the Volcker Rule (the rule within the Act intended to stop speculative trading and investments by banks), and all other recent financial regulation.  We will not see proprietary trading in banks returning to the extent that it puts the whole institution at risk (when it does come back – and it will come back – there will be severe ring-fencing of assets at risk) but we should all want to see our businesses better financed with a wide range of financial products.

Cliff Moyce

NB this article was first published on 6 Feb 2017 in Financial IT

How Blockchain helps achieve more efficient regulatory compliance

[Dec 2016]

Could the speed, security, and immutability of blockchain help financial institutions achieve regulatory compliance in a more efficient manner?  There are good reasons to think so.

Blockchain technology has the potential to transform many business processes, making the data used in those processes more available, transparent, immediate and secure. It can also strip out large amounts of cost, delay, error handling, and rework.  Possible uses include trade reporting; clearing, confirmation, validation and settlement; record keeping; monitoring and surveillance; risk management; audit; management accounting; financial accounting; and, regulatory compliance including financial crime prevention. The immutability, immediacy and transparency of information captured within a blockchain means that all necessary data can be recorded in shared ledgers and made available in near real-time. In such a world, stakeholders will no longer be simple recipients of post-hoc reports; instead, they can be part of the real-time process.

By necessity, blockchain technology is complicated, but the underlying idea is simple: it is a distributed ledger or database running simultaneously on many (possibly millions of) nodes that can be distributed geographically and across many organisations or individuals. What makes blockchain unique is its cryptographically assured immutability, or irreversibility. When transactions on the ledger are grouped into blocks and written to the database, they are accompanied by cryptographic verification, making it nearly impossible to fraudulently alter the state of the ledger.
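The immutability mechanism described above can be illustrated with a toy hash chain in Python. This is a teaching sketch only: real blockchains add digital signatures, consensus and distribution, but the core idea that each block commits to its predecessor's hash is the same.

```python
import hashlib
import json

# A toy hash chain showing why retrospective edits are detectable: each block
# commits to the hash of its predecessor, so altering an earlier block breaks
# every link that follows.

def block_hash(block):
    payload = json.dumps(
        {k: block[k] for k in ("index", "prev_hash", "transactions")},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append_block(chain, transactions):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev_hash,
             "transactions": transactions}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False  # block contents no longer match the recorded hash
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to the predecessor is broken
    return True

chain = []
append_block(chain, ["A pays B 100"])
append_block(chain, ["B pays C 40"])
print(is_valid(chain))   # True

chain[0]["transactions"] = ["A pays B 1000"]   # attempt to rewrite history
print(is_valid(chain))   # False
```

Because every honest node can run this validity check independently, a fraudster would need to rewrite the block, every subsequent block, and do so on a majority of nodes simultaneously, which is what makes the ledger effectively immutable.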

Blockchain’s immutability lends itself to proof-of-process for compliance, eg keeping track of the steps required by regulation. Recording actions and their outputs immutably in a blockchain would create an audit trail for regulators to verify compliance. Perhaps more importantly, regulators could have read-only, near real-time access into the private blockchains of financial organisations. This would allow them to play a more proactive role, analyse information in real time, and even issue alerts and warnings automatically. Such a change could dramatically reduce the time, effort and cost that financial institutions spend on regulatory reporting, as well as improve quality, accuracy and confidence in the process.
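A minimal sketch of that proof-of-process idea, assuming a purely hypothetical set of required steps: compliance events are appended to a log, and a read-only check (the kind a regulator could run in near real time) computes which required steps are missing for a transaction, which could in turn drive automatic alerts.

```python
# Hypothetical sketch of proof-of-process compliance: steps are appended to an
# append-only log, and a read-only regulator-side check flags any required
# step that is missing for a given transaction. Step names are illustrative.

REQUIRED_STEPS = ["kyc_check", "sanctions_screen", "approval", "trade_report"]

audit_log = []  # append-only list of (transaction_id, step) records

def record_step(transaction_id, step):
    audit_log.append((transaction_id, step))

def missing_steps(transaction_id):
    """What a regulator with read-only access could compute at any moment."""
    done = {step for tx, step in audit_log if tx == transaction_id}
    return [s for s in REQUIRED_STEPS if s not in done]

record_step("TX-42", "kyc_check")
record_step("TX-42", "sanctions_screen")
record_step("TX-42", "approval")

print(missing_steps("TX-42"))  # ['trade_report'] -> would trigger an alert
```

In a blockchain-backed version the log itself would be tamper-evident, so the institution could not quietly backfill a missed step after the fact.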

A further possible extension is blockchain being used as a digital identity management grid, with all of the information required for screening and compliance being held about individuals and/or firms in a chain.  This would reduce KYC/AML (know your client / anti-money laundering) processes to simple automated checks of a blockchain-powered, market-wide utility.  

It is likely that sharing sensitive information about customers between financial organisations will start to become the norm once trust is established in a blockchain-enabled ecosystem. For example, SWIFT has announced that its own KYC registry, which already includes more than 1,000 member banks, will be shared with trusted partners and customers in future. This is one of the early steps towards fully trusted digital identities, a so-far unachieved target for the industry.

So what are the barriers to blockchain adoption? Mainly privacy and performance. Using blockchain for trade reconciliation, settlement and the like would require sophisticated privacy controls and management of access to the information residing in the blockchain. Out of the box, private (permissioned) blockchains can provide two types of access control: read-only and read/write. Additionally, it is possible to introduce permissions to mine, receive or issue assets. Real-world applications in capital markets and other sectors, however, require more flexible and granular access management schemas: simply putting complete information about all transactions on a shared ledger open to anyone on the network is something no market participant could accept. Speed is also a possible issue for the immediate adoption of blockchain in high-frequency processes. Blockchain-enabled databases are significantly slower than conventional databases because of the cryptographic component, which is very calculation-intensive. But speed issues tend to get solved in the tech world, so this may just be a waiting game in which we migrate processes to blockchain once the technology catches up.
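The kind of granular access management described above can be sketched as role-based field filtering: each participant sees only the transaction fields their role permits, rather than all-or-nothing read access. The roles and fields here are hypothetical; production permissioned ledgers implement far richer schemes.

```python
# Sketch of granular, role-based read access to ledger data: instead of a
# single read-only permission, each role sees only an allowed subset of
# fields. Roles, fields and the transaction below are all illustrative.

ROLE_VISIBLE_FIELDS = {
    "regulator":    {"id", "timestamp", "amount", "parties"},  # full view
    "counterparty": {"id", "timestamp", "amount"},
    "observer":     {"id", "timestamp"},                       # existence only
}

def visible_view(transaction: dict, role: str) -> dict:
    allowed = ROLE_VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in transaction.items() if k in allowed}

tx = {"id": "T1", "timestamp": "2016-12-01T10:00:00Z",
      "amount": 250_000, "parties": ["BankA", "BankB"]}

print(visible_view(tx, "observer"))
print("parties" in visible_view(tx, "counterparty"))  # False
```

Real systems must also enforce such rules cryptographically (eg via per-channel ledgers or encrypted payloads) rather than by filtering at read time, which is precisely why this is still an adoption barrier.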

In summary, Blockchain technology has the potential to revolutionise many business processes in financial services and capital markets, as well as regulatory processes, bringing great benefit to the regulatory compliance profession.

Cliff Moyce, December 2016

This article was first published on the blog of the Financial Crime Prevention Association.

Cloud as a solution to legacy system problems (part 2): six steps to achieve Cloud migration

[January 9th 2017]

Cloud computing has a significant role to play in financial services as a strategic solution for resolving the problems of legacy systems. Here’s how financial institutions should approach Cloud migration.

In my previous article (The new IT experience: Cloud as a solution to legacy system problems in financial services), I argued that Cloud computing not only makes sense in its own right, but has an extra and significant role to play in financial services as a strategic solution for resolving the problems of legacy systems. The argument is that the process of migrating to Cloud will facilitate the large-scale rationalisation and decommissioning of the (largely home-grown) problem systems that bedevil financial institutions. The previous article explained the benefits that can accrue from migrating to Cloud; this article provides more information on how to undertake such a migration.

As stated above, the main benefit of Cloud migration for financial institutions is the opportunity to rationalise systems, applications, databases, data sources, etc., and thus eliminate large amounts of the redundancy usually seen in such environments. By doing so, IT-based business operations become easier and cheaper to deliver and support, as well as more accurate, timely and secure. Data management becomes easier and more reliable, while data quality can be assured in ways that are impossible with the current myriad of overlapping and duplicating (but never agreeing) data inputs and outputs. Not only do fewer systems and more accurate data make it easier and faster to make changes and fix problems, but Cloud computing is designed to facilitate truly agile development practices, with development and test servers being spun up quickly on demand (a process that can take months in other infrastructure models). Improved data management alone can be the difference between achieving and not achieving regulatory compliance.
My recommended steps for planning and effecting Cloud migration are: assess current systems; design to avoid vendor lock-in; manage processes and culture; create a business case for change; avoid over-planning; and drop applications that are not Cloud-enabled.

  1. Assess current systems. Cloud migration should start with an audit of the entire systems infrastructure and a reassessment of applications and databases. An audit is a valuable opportunity to rethink the value and relevance of existing applications and decide which ones are worth modernising and reconfiguring; which ones should be replaced by new applications; and which are no longer relevant and should be retired. A major factor in rationalisation decisions is redundancy – the target should be to have only one software module for each required function (build once, use many), and only one ‘system’ providing each business service (internally or externally). This compares to the current model that will often see dozens (perhaps hundreds) of systems doing the same thing for different business segments (eg how many reporting systems or risk systems do you have in your organisation?). Another factor on which to base rationalisation decisions is how easily (or not) an application can be enabled to work in the Cloud while providing the major benefits of the Cloud (scalability, availability, security, etc.). Cloud enablement should not be taken for granted. Some applications will convert easily to being Cloud-enabled; for others it will be more difficult, and the business case for doing so will be weak or non-existent.
  2. Design to reduce vendor lock-in. Vendor lock-in is not in and of itself a bad thing (the vendors are providing great services), but it needs to be an active decision and not something into which you stumble unthinkingly. Avoiding lock-in, and thus retaining the possibility of moving relatively easily from one major Cloud provider to another, may make sense from a strategic perspective, but it can also reduce the benefits that can be realised from Cloud. There is certainly a case for going ‘all-in’ with a vendor and thus maximising the benefits. For example, event-driven AWS Lambda compute functions are not portable outside AWS, but they do deliver significant benefits within AWS. By comparison, using universal instead of proprietary technologies will afford you more flexibility in future. For example, implementing your stack on an open-source PaaS (platform as a service) such as Mesosphere will make your architecture more ‘portable’, consuming only the infrastructure from the Cloud provider.
  3. Manage processes and culture. Technology develops faster than culture and processes. Opportunities offered by Cloud computing technology can give organisations incredible benefits, but only to the extent that processes and culture are supportive of the new practices. Superfast speed of infrastructure provision in Cloud can still run up against a wall of bureaucratic processes if you let it. Changing mind-sets is a huge determinant of Cloud success, and is something that is too often underrated or even overlooked.
  4. Have a business case. Don’t make migration an end in itself. Never do anything without a clear and compelling business case. Benefits you should be looking to validate and quantify include reduced capital expenditure; reduced operating costs; resources freed up to work on higher-value activities; increased development velocity; faster deployments to production; improved ease of support; reduced operational failure rates; improved data management and quality; increased transparency (monitoring, surveillance, reporting); easier to achieve regulatory compliance, etc.
  5. Don’t over-plan a long-term migration strategy. We all know that no plan survives first contact with the enemy. Instead, set goals and start moving toward them in a pragmatic and agile fashion. Remember that the environment will be changing around you (and the world of financial services has changed enormously in the past nine years), so what is important today might be less important tomorrow. Waterfall planned approaches simply store up risk for later on. Better to fail early with fewer consequences than go for big bang approaches to infrastructure change.
  6. Don’t migrate applications that are not fully Cloud-enabled. This is a big one, as doing so merely replaces old problems with new ones. Continued availability of applications – one of the key reasons for migrating to Cloud – depends directly on their Cloud readiness, and you won’t get it if the application is not enabled for Cloud. Simply having something sitting on Cloud infrastructure is not the same thing as it being Cloud-enabled.
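The audit in step 1 can begin with something as simple as grouping an application inventory by the business function it provides and flagging every function served by more than one system. A sketch in Python, with an entirely hypothetical inventory:

```python
from collections import defaultdict

# Illustrative first pass at a redundancy audit: group an application
# inventory by business function and flag overlaps. The target state is one
# system per function ("build once, use many"). Inventory is hypothetical.

inventory = [
    ("EquitiesRisk",  "risk reporting"),
    ("FXRisk",        "risk reporting"),
    ("CreditRisk2",   "risk reporting"),
    ("ClientPortal",  "client reporting"),
    ("LegacyReports", "client reporting"),
    ("TradeCapture",  "trade capture"),
]

def redundant_functions(systems):
    by_function = defaultdict(list)
    for name, function in systems:
        by_function[function].append(name)
    return {f: names for f, names in by_function.items() if len(names) > 1}

for function, names in redundant_functions(inventory).items():
    print(f"{function}: {len(names)} overlapping systems -> {names}")
```

Each flagged overlap becomes a rationalisation decision: modernise one candidate, retire the rest, and only then migrate.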

Conclusion: Theories of evolution argue for two factors: random mutation and natural selection. The survival of organisations is also subject to these factors, except that they have to choose/engineer their own mutations while selection is done by clients and other stakeholders. In my opinion, Cloud is one of those mutations that in future will be demanded by stakeholders, from clients and shareholders to partners and regulators. As data protection, cyber security and cost-effectiveness become bigger and bigger factors in the environment, Cloud will move from being an option to being a business necessity. Knowing how to migrate to Cloud will become a differentiating factor in continued and future business success.

Cliff Moyce, January 2017.

This article was published first at Tabb Forum 

Cloud as a solution to legacy system problems in financial services

[November, 2016]

Back in the days before centralised water and electricity, people had to dig their own wells and procure their own generators. As centralised infrastructure became available, it made sense to connect to the grid. Similarly, Cloud computing has now become that grid for business. It makes sense in its own right, but it has an extra and significant role to play in financial services and capital markets – that of strategic solution for resolving legacy systems problems.

The problems of legacy IT infrastructures in financial services and capital markets are many and manifest. On-premises, home-grown, self-managed infrastructures fail any modern objective measure of value for money, time and quality. They are expensive to operate; inflexible; opaque; hard and slow to support, enhance and test; insecure; difficult to scale; and contain high levels of redundancy and obsolescence.

When times were good, business divisions effectively (or actually) had their own IT divisions building their systems. Though incredibly inefficient from a corporate perspective (the corporation was often building the same systems over and over again from scratch), this “shadow IT” model allowed business units to respond quickly to opportunities and client needs (at least in theory). Since then, budget constraints have forced financial institutions to integrate previously independent systems into something that can be operated, distributed and secured centrally. This has created a whole new set of problems.

To try to fix or replace everything in the legacy systems infrastructure – and there have been many articles exhorting the industry to do just that (eg “by starting again with a clean slate”) – is to oversimplify the problem and to over-invest in the solution. The project is simply too big, expensive and risk-laden to contemplate. Further, it would not solve all the problems of building and running your own infrastructure, as it could simply replicate the model with ‘new legacy’. Yet there is a strong need to align IT services to modern business operations, as many institutions now face the need to replace obsolete hardware in an IT estate that has been starved of money since the financial crash of 2008. Setting a strategic target of migrating infrastructure to the Cloud will force a large degree of rationalisation that might not otherwise be contemplated, thus reducing the problems of redundancy and support.
It will also force adoption of an infrastructure that exemplifies best practice in all measures (flexibility, scalability, performance, security, sustainability, etc.) and will allow financial institutions to move onto more modern hardware, firmware, middleware, operating systems, databases and applications with little or no capital investment. Benefits of Cloud include:

  • Security. Although people tend to equate security with physical possession, that is a misconception, comparable to believing money is safer under a mattress than in a bank. In reality, Cloud is much more secure than any other option for infrastructure management and service delivery. The average on-premises infrastructure is penetrated multiple times a day, whereas the average big-name Cloud provider may only have been penetrated a handful of times in its existence (they may claim it has never happened). Also, you cannot be as secure as you should be if any version of your firmware, middleware, operating system, databases, anti-virus, firewall, application software, etc., is not fully up to date (one high-profile, successful cyber-crime intrusion and financial theft in banking was enabled in part by a small group of servers having been overlooked for security software upgrades). Vendors and the open source community work hard to plug quickly any vulnerabilities uncovered in their products and services; but many of their customers are slow to implement essential upgrades (the worst that I have seen is a 10-year delay, but there are bound to be worse examples). The chances of an on-premises installation ticking all of the current version boxes are close to zero. Financial institutions just do not have the resources for monitoring, planning, and implementing new versions. The chances of a top Cloud vendor ticking the boxes are far higher – it is what they do for a living.
  • Cost. A big issue for enterprises is the high cost of running and maintaining IT infrastructure. A bank can spend up to 50% of its budget running IT-based business operations. Cloud computing offers near real-time, on-demand, subscription-based provisioning of almost infinite compute, storage and network resources, with the ability to scale up and down automatically, intelligently and in a matter of seconds. This provides the opportunity for huge increases in efficiency and productivity.
  • Availability. The reality of running on-premises data centres is that availability and recoverability are never guaranteed. Provided that applications are Cloud-enabled, continuous availability and disaster recovery are a function of the design and infinite horizontal scalability of the Cloud.
  • Speed of deployment. Applications designed for the Cloud and using Cloud services take significantly less time to build and deploy and are cheaper to run.
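
The automatic scale-up and scale-down described under Cost above is, at heart, a simple control loop. The sketch below is a toy illustration of that decision logic only – the function name, utilisation target and instance limits are invented for the example and are not any vendor's API:

```python
def desired_instances(current, cpu_utilisation, target=0.6, min_n=2, max_n=20):
    """Toy sketch of elastic scaling: choose an instance count that moves
    average CPU utilisation toward the target, within fixed bounds."""
    wanted = round(current * cpu_utilisation / target)
    return max(min_n, min(max_n, wanted))

print(desired_instances(4, 0.9))   # 6 -> scale up under load
print(desired_instances(4, 0.3))   # 2 -> scale down when idle
```

Real Cloud platforms evaluate rules like this continuously and provision or release resources in seconds, which is what makes the subscription model so much cheaper than sizing an on-premises estate for peak load.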

The benefits of Cloud listed above allow organisations to advance toward a more flexible, agile and data-driven model in several ways:

  • empowering agile approaches to development. By enabling self-service infrastructure acquisition, provisioning and deployment as well as elasticity and scalability, Cloud computing encourages innovation and experimentation, and speeds up continuous integration and delivery, empowering agile approaches to product, service or software development.
  • enabling data management and business intelligence. One of the bases of the digital economy – the availability of data and the ability to process data – is enabled and reinforced in the Cloud, which offers access to tools and compute power to process and consolidate Big Data and prepare them for specific tasks. Migrating data sources and data pipelines to the Cloud gives the technology team sufficient data and infrastructure elasticity to run predictive analytics and gather data-driven business intelligence.
  • improving the speed and quality of decision making.  Cloud plays a vital role in enabling faster and more informed decision making by providing broad and immediate access to data irrespective of their location, thereby reducing interdependencies between ‘information holders’. A strategy of migrating legacy infrastructure to the Cloud should have ubiquitous, transparent access to operational, process and customer data as an important objective. Current forced integrations of multiple systems (many doing a version of the same thing in an inconsistent manner) commonly create information silos.
  • freeing up resources and improving flexibility. Cloud can make an organisation ‘lighter’ and more flexible, as it allows a move from systems to services. This is especially relevant in modern heterogeneous computing environments with multi-tiered applications requiring a broad mix of technologies. Creating a ‘composable enterprise’ of software modules can become a reality if made an objective of migrating to Cloud. Resources that were previously invested in running and maintaining outdated technology can be re-directed to innovation in serving customers.

The above points are illustrated well by the global insurance group Ageas. The company adopted a Cloud-based enterprise platform that integrates the full range of back-end and front-end processes, from policy administration to claims and from finance to HR. As a result, the company’s processes are streamlined, operations agile and enterprise analytics easily accessed.


The financial services industry has available to it a strategic solution for large enterprise legacy IT architectures that if implemented correctly can free institutions from the limitations and implications of outdated technology. Cloud computing is much, much more than simply outsourcing the operation of your IT systems. It is a paradigm shift in how we think about and experience IT in our organisations. The benefits will be seen by customers, business users, IT developers, financial officers, operational executives, shareholders, regulators and many other stakeholders. Cloud is the future. Never forget: Every cloud has a silver lining!

Cliff Moyce, November 2016

This article was published first on Tabb Forum.

Cognitive analytics gives business the edge

Monday, 10 October 2016

The cognitive analytics revolution in business is underway. It is underpinned by artificial intelligence, cognitive computing and machine learning. Cognitive analytics will give business executives such as the CEO, CFO, CIO and CMO massively enhanced data-driven decision-making abilities, as well as the ability to track and learn from prior decisions. The change means that decisions can be informed by non-intuitive insights on products, services, business operations and markets (including client behaviours) drawn from a wide variety of sources. Those sources will include unstructured data such as social media posts, images, and academic documents. We have seen already how the ability to do post-hoc analyses of the economic, political and legal decisions of governments and legislatures can generate non-intuitive insights unavailable through traditional methods; now it is time for the boardroom to be doing the same.

It almost goes without saying that the use of the word ‘cognitive’ implies the continued quest in computing to create intelligent business machines that operate as per the human brain, “by reverse engineering the computational function of the brain” (Modha, D.S., 2011). Combining neural models and technologies with huge processing power can take us well beyond what any of us could achieve alone or in teams, even huge teams, with current analysis tools and techniques.

The way that cognitive analytics achieves its magic over and above current data analysis methods is through (1) ability to analyse huge amounts of unstructured data alongside traditional structured data sets; (2) ability of cognitive analytics tools to generate non-intuitive insights from data; and (3) ability for the tools to learn as they work – including how decisions suggested by the tool previously panned out when implemented (post-hoc analyses). Unstructured data that are handled well by cognitive analytics tools include emails, videos, documents, images, social media posts, academic articles etc. Cognitive computing uses natural language processing, probabilistic reasoning, machine learning and other technologies and techniques to analyse content efficiently; analyse context; and, find near real-time insights and answers hidden within massive amounts of information. Cognitive systems can adapt and get smarter over time by learning through their interactions with data and through human decision-making (including decisions suggested by the same cognitive systems). Insights provided through cognitive analytics will focus us more on the questions that we ask. These insights can help break us free from the prisons of wrong assumptions, faulty hypotheses, and the tendency to confuse symptoms with causes.
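
The 'learning as it works' behaviour in point (3) can be illustrated with a toy example. The sketch below is emphatically not a real cognitive-analytics engine – just a minimal Python scorer over unstructured text that adjusts its word weights from the observed outcome of each past decision, which is the feedback loop the paragraph describes:

```python
from collections import defaultdict

class FeedbackLearner:
    """Toy illustration of a system that learns from post-hoc outcomes:
    each token in a text accumulates weight from decision feedback."""

    def __init__(self):
        self.weights = defaultdict(float)

    def score(self, text):
        # Sum the learned weights of the tokens in the text.
        return sum(self.weights[tok] for tok in text.lower().split())

    def feedback(self, text, outcome, rate=0.5):
        # outcome: +1 if the suggested decision worked out, -1 if it did not.
        for tok in text.lower().split():
            self.weights[tok] += rate * outcome

learner = FeedbackLearner()
learner.feedback("late delivery complaint", -1)
learner.feedback("repeat purchase praise", +1)
print(learner.score("complaint about late delivery"))  # negative after feedback
```

A production system would use natural language processing and probabilistic models rather than raw token counts, but the principle – decisions feeding back into the model – is the same.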

All areas of business can be supported and enhanced by cognitive analytics. These include business strategy (for example, mergers and acquisitions); product design and marketing; financial planning (from capital planning to cash management to financial control); and business operations (eg the efficient and effective deployment of resources for maximum productivity).

Financial services and capital markets have been using algorithmic artificial intelligence methods for some time. Eg algorithmic trading methods using machine-learning and ‘cognitive’ (ie loosely coupled) logic to make decisions; and, predictive / trends / risk and behavioural analyses using similar methods for financial crime prevention.  Those algorithmic cognitive or quasi-cognitive approaches are also seen in wealth management ‘robo-advisory’ offerings, and will start to be seen more generally in digital banking.  In the finance function we have forecasting systems that use online analytical processing (OLAP). We also see algorithmic predictive analyses in cashflow forecasting and demand planning. What ‘real’ (ie based on neural models) cognitive analytics will give finance and business planning functions is the ability to use many data types that cannot be analysed easily currently; further and better analyses of the huge amounts of data held by the function; and, the ability to derive non-intuitive insights from data that are not being derived currently. This step-change in capability will strengthen the ability of those functions to add value to strategic and operational planning. Eg in financial control, cognitive analytics can (relatively pro-actively) highlight problems, or areas for optimisation. It can also track in real time or monitor retrospectively actual performance against financial plans, and provide feedback that companies can use to fine-tune their planning approaches. In fact, if a toolset is genuinely cognitive it should learn to fine-tune approaches itself. Similarly, the ability of marketing and product development teams to better predict consumer behaviours will reduce the risk of product failure as well as driving innovation that may not have occurred otherwise.

In summary, cognitive analytics is set to transform our ability to plan, develop and run businesses. It is genuinely transformational. Though it is not a panacea for all ills, it will help enormously with diagnosing those ills. Early adopters will be well rewarded.

Cliff Moyce

[first published on 10 October 2016]


How Blockchain Can Revolutionize Regulatory Compliance

[August 2016]

Blockchain is currently one of the hottest topics in financial services and capital markets. The technology has the potential to transform many business processes, making the data used in those processes more available, transparent, immediate and secure.  It could also strip out large amounts of cost, delay and error handling/rework.  Possible use cases include trade reporting; clearing, confirmation, validation and settlement; recordkeeping; monitoring and surveillance; risk management; audit; management and financial accounting; and regulatory compliance (including – but by no means limited to – financial crime prevention). The immutability, immediacy and transparency of information captured within a blockchain means that all necessary data can be recorded in shared ledgers and made available in near real time.  In such a world, stakeholders will no longer be simple recipients of post-hoc reports; instead they can be part of the real-time process.

Blockchain first emerged as the technology that powers the cryptocurrency bitcoin.  However, since its first appearance in 2009, blockchain’s potential uses have far exceeded cryptocurrency applications.  By necessity, blockchain technology is complicated in its implementation, but the underlying idea is simple: it is a distributed ledger or database running simultaneously on many nodes (possibly millions) that can be distributed geographically and across many organizations or individuals. What makes blockchain unique is its cryptographically assured immutability, or irreversibility.  For example, when transactions on the ledger are grouped into blocks and written to the database, they are accompanied by cryptographic verification, making it nearly impossible to alter fraudulently the state of the ledger. Another way to think about blockchain is as trust/consensus technology: the changes in the data are recorded into the blockchain when network participants agree that a transaction is legitimate in accordance with shared protocols and rules.
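
The hash-linked structure described above can be sketched in a few lines of Python. This is an illustrative toy only (no networking, no consensus protocol), but it shows why altering an earlier transaction breaks every subsequent link in the chain:

```python
import hashlib
import json

def block_hash(contents):
    # Hash the block's contents, including the previous block's hash,
    # so any alteration invalidates every later block.
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "txs": transactions}
    block["hash"] = block_hash({"prev": prev, "txs": transactions})
    chain.append(block)

def verify(chain):
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != block_hash({"prev": block["prev"], "txs": block["txs"]}):
            return False
    return True

chain = []
append_block(chain, [{"from": "A", "to": "B", "amount": 10}])
append_block(chain, [{"from": "B", "to": "C", "amount": 5}])
assert verify(chain)
chain[0]["txs"][0]["amount"] = 999   # attempted fraudulent alteration
assert not verify(chain)             # tampering is detected immediately
```

In a real blockchain the same verification runs independently on every node, which is what turns this simple data structure into the trust/consensus technology described above.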

Interest in blockchain in financial services and capital markets continues to grow – and will accelerate as live solutions make their way to market.  Many organizations – including banks, exchanges and fintech firms – have announced initiatives in 2016, while the list of possible use cases being proposed in articles and forums is lengthening.

Applications in Compliance

One of the most exciting features of blockchain from the compliance perspective is its practical immutability: as soon as data is saved into the chain, it cannot be changed or deleted. That is why blockchain is used as the document or proof for the transfer of any digital asset, for example bitcoins or other digital currencies. By the same token, it can be used as a record of ownership of physical property – an approach currently undergoing testing by Sweden’s national land survey, where a blockchain-powered system for registering and recording land titles is attempting to digitize real estate processes.  Blockchain’s immutability also lends itself to the application of proof-of-process for compliance.  Blockchain could be used to keep track of the steps required by regulation. Recording actions and their outputs immutably in a blockchain would create an audit trail for regulators to verify compliance.  Almost as importantly, regulators could have read-only, near real-time access into the private blockchain of financial organizations.  This would allow them to play a more proactive role and analyze information in real-time mode.  In other words, this brings them closer to becoming participants in – rather than customers of – the process. Such a change could reduce dramatically the time and effort (and therefore cost) that financial institutions spend on regulatory reporting, as well as improving the quality and accuracy of, and confidence in, the process.

Another regulatory field where blockchain could play an important role is in KYC (know your customer) and AML (anti-money laundering). Banks and other financial institutions have to complete many tasks and steps as a part of the onboarding process for new clients. In addition to data collection, there are important rules around validation, confirmation and verification to be completed before new clients can be onboarded.  In some markets, the process can take several months.  Many of the steps could be eliminated if the information existed already in a secure, tamper-resistant database – an immutable blockchain. Any changes to customer data would be distributed to participants in the blockchain immediately. The chain would provide records of procedures and compliance activities for each client.  Blockchain would play the role of proof-of-process, so that all steps are easily traceable and regulators can be confident about the veracity of the information. Moreover, individuals would be co-custodians of the information on the blockchain, which could provide additional protection against identity theft (impacting or even disintermediating businesses like credit-monitoring services).

A further possible extension is blockchain as a digital identity management grid, with all information required for screening and compliance being held about individuals and/or firms in a chain.  This would reduce KYC/AML processes to simple automated checks of a blockchain-powered, marketwide utility.  It is likely that sharing sensitive information about customers between financial organizations will start to become the norm once trust is established in a blockchain-enabled ecosystem.  Interestingly, SWIFT has announced that their own KYC registry, which already includes more than 1,000 member banks, will be shared with trusted partners and customers in the future.  This is one of the early steps to fully trusted digital identities in the industry – which must be the target business and legal outcome.

Smart Contracts

It is hard to explore potential applications of blockchain without mentioning smart contracts. In short, smart contracts are custom, self-executing programs (distributed applications) that run on a blockchain and are triggered by some external data or event that lets them modify some other data; if certain conditions are met, a smart contract can update the blockchain according to predefined rules (e.g., transfer digital assets from one participant to another).  Once this technology gathers enough momentum, its proponents believe smart contracts will be no less revolutionary than the invention of HTML, which transformed the internet and, subsequently, the entire world economy. The appeal of smart contracts is undeniable, as they could potentially replace many functions currently executed by costly or inefficient intermediaries.  However, smart contract technology clearly isn’t ready for prime time yet, as evidenced by the recent much-publicized DAO debacle, where a poorly formulated contract allowed a savvy user of Ethereum, a popular public blockchain, to obtain millions of dollars’ worth of digital currency. Smart contracts need to become much more robust to reach the comfort level necessary for widespread adoption by industry.
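
The 'if conditions are met, update the ledger' pattern can be illustrated as a self-executing rule in ordinary code. The sketch below is a hypothetical escrow example – the event name, participant names and ledger structure are invented for illustration, and a real smart contract would run on-chain rather than in local Python:

```python
def make_escrow_contract(buyer, seller, amount):
    """Hypothetical smart-contract sketch: a rule that releases escrowed
    funds automatically when a triggering event arrives."""
    def contract(ledger, event):
        # Trigger: an external event confirming delivery releases the funds.
        if event == "delivery_confirmed" and ledger[buyer] >= amount:
            ledger[buyer] -= amount
            ledger[seller] += amount
            return True
        return False
    return contract

ledger = {"buyer": 100, "seller": 0}
escrow = make_escrow_contract("buyer", "seller", 40)
escrow(ledger, "delivery_confirmed")
print(ledger)  # {'buyer': 60, 'seller': 40}
```

The DAO incident mentioned above shows the risk in this model: once deployed, a contract executes exactly as written, so a flaw in the predefined rules is itself immutable.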

The smart contracts issue reminds us that with all its promise, blockchain is still quite experimental and not without its challenges with regard to the use cases being discussed in the industry.  Some of the barriers to adoption that come to mind are privacy, performance and infrastructure.  Using blockchain for trade reconciliation, settlement and the like would require sophisticated privacy controls and the management of access to the information residing in the blockchain. Originally, blockchain was designed for precisely the opposite – namely, to enable every network participant to view the entirety of the data.  With Bitcoin, for example, anyone can view the entire ledger if they want to. Out of the box, private (permissioned) blockchains can provide two types of access control: read-only and read/write. Additionally, it is possible to introduce permissions to mine, receive or issue assets. However, real-world applications in capital markets and other sectors require more flexible and granular access management schemas; simply putting complete information about all transactions on a shared ledger open to anyone on the network is obviously something no market participants would agree to. In a perfect world, blockchain would allow enterprise companies to map their existing LDAP (Lightweight Directory Access Protocol) users/groups in it. This is a non-trivial problem that remains unsolved at this time, to the best of our knowledge.
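
The two out-of-the-box access levels described above – read-only and read/write – can be sketched as follows. Participant names are illustrative, and a real permissioned blockchain would enforce these rights cryptographically rather than in application code as here:

```python
class PermissionedLedger:
    """Toy sketch of a permissioned ledger's access model: participants
    hold explicit read or read/write rights."""

    def __init__(self):
        self.blocks = []
        self.permissions = {}   # participant -> set of rights

    def grant(self, participant, *rights):
        self.permissions[participant] = set(rights)

    def read(self, participant):
        if "read" not in self.permissions.get(participant, set()):
            raise PermissionError(f"{participant} may not read")
        return list(self.blocks)

    def write(self, participant, record):
        if "write" not in self.permissions.get(participant, set()):
            raise PermissionError(f"{participant} may not write")
        self.blocks.append(record)

ledger = PermissionedLedger()
ledger.grant("bank", "read", "write")
ledger.grant("regulator", "read")    # near real-time, read-only access
ledger.write("bank", {"kyc": "client-123 verified"})
print(ledger.read("regulator"))      # regulator can observe but not alter
```

This is exactly the regulator-as-read-only-participant arrangement discussed earlier; the unsolved part is extending such coarse rights into the granular, LDAP-style schemas enterprises actually need.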


Speed is often cited as a big problem for the wider adoption of blockchain. Performance of blockchains is significantly slower than conventional databases, and with good reason: the cryptographic component, which is what gives blockchain its most attractive features, is very calculation-intensive. For example, the throughput capacity of bitcoin is only around seven transactions per second. This does not compare very well, for example, to the average of 2,000 transactions per second processed by the VISA payment system, with the peak capacity of 56,000 transactions per second (although they never actually use more than about a third of this, even during peak shopping periods). There are attempts being made to build blockchains capable of higher performance. Most notably, BitShares claims the ability to handle up to 100,000 transactions per second, which would be plenty fast enough if this were an apples-to-apples comparison.  However, the definitions of performance used by BitShares in their publicized explanations seem different from the accepted norm. These comparisons are further complicated by factors like collocation and the distributed nature of blockchains, but in the grand scheme of things, for now the performance gap remains unbridged.

Setting up and managing the infrastructure to support blockchain solutions is another challenge to organizations experimenting with the technology. As information security, operations, cloud and other teams start introducing blockchain as a new data/code layer in their firms, the process can be quite disruptive, in particular because there are no best practices available that would streamline the roll-out process. There are early attempts to improve the situation, like Microsoft’s Project Bletchley or Hyperledger, but they are not yet finalized for production use.

In summary, blockchain technology has the potential to revolutionize and improve many business processes in financial services and capital markets. Of the many processes that could be improved by the technology, it is regulatory processes such as KYC and financial crime prevention (e.g., AML) that may be early converts.  If this turns out to be the case, the benefits to the industry will be enormous.

Cliff Moyce

This article first appeared in Corporate Compliance Insights, the global premier news site for compliance, ethics, audit and risk:

Service oriented architectures and web services as a solution to legacy IT problems

Cliff Moyce, October 2015.

When computing became ubiquitous in administrative environments in the late 1980s and early 1990s it was welcomed as an opportunity to improve the efficiency and effectiveness of business processing.  Manual or semi-manual business processing at that time was noted for its inefficient hand-offs, checking, and duplicated effort as well as storage problems.  And yet 30 years later we look at extant (‘legacy’) IT systems architectures as representing the biggest barrier to productivity in some types of organisation.  For example, large banks now spend nearly 50% of their operating budgets on IT – and yet it is IT configured in ways that would horrify any student of process design.  Eg multiple systems (sometimes meaning twenty or thirty, not just two or three) doing the same thing; forced ‘integration’ between systems requiring software, middleware and hardware that should never have been required in the first place; inconsistencies between systems meaning reports have to take an aggregate of all outputs rather than relying on a golden source, etc.  Attempts to rationalise the architecture by building a single new system to replace multiple old systems often result in yet another system being added to the pile.  Support costs are high as people struggle to manage and resolve the complexity, risk and issues.  What to do about these problems is a long running debate (eg de Souza, 2015; Preimesberger, 2014; Matei, 2012).  One approach that is often espoused is to design and implement a new, more modern architecture using a radical clean slate / blueprint style approach (eg Marchand & Pepper, 2015).  While recognising the temptation to start again, this article asserts that big-bang approaches to legacy IT systems replacement can be naive, expensive and fraught with risk. Instead, pragmatic approaches that can deliver improvements using what exists currently are preferred and recommended.
As well as discussing technologies that can enable such approaches, this article considers the cultural and organisational implications of adopting these methods.

The debate on legacy systems in some organisations is intensifying as expectations for cost efficiency, flexibility, and usability increase.  Legacy architectures are typically described in articles and presentations as unplanned; complex; poorly understood; slow and expensive to operate, support and enhance; old fashioned in their interfaces and reporting capabilities; hiding redundancy; difficult to monitor, control and recover; susceptible to security problems; and, hard to integrate with newer models and technologies such as cloud computing and mobile devices:  “Even minor changes to processes can involve rework in multiple IT systems that were originally designed as application silos” (Serrano, Hernates & Gallardo 2014).  Getting old and new applications, systems and data sources to work seamlessly can be difficult, verging on impossible.  This lack of agility means that legacy systems in their existing configuration can be barriers to improved customer service, satisfaction and retention.  In regulated sectors they can also be a barrier to achieving statutory compliance.  Pressure to replace these systems can be intensified by new competitors who are able to deploy more modern technologies from day one.

Explanations for problems associated with legacy architectures include excessive complexity arising from a post-hoc need to integrate systems that were originally designed to be autonomous; poor knowledge of systems due to lack of documentation and loss of original development teams; individual applications growing ‘like Topsy’ as new functions and modules are bolted on to meet customer demand; use of technologies, models and paradigms that are now outdated; duplication arising from multiple systems doing the same thing, etc.  ‘Local initiatives’ are sometimes argued to be partly to blame for the situation (eg Marchand & Pepper, 2015) as business lines or functions commission their own system builds or buy package implementations, perhaps with little regard to integration and support issues.  Many of these explanations for the problem could be summarised as ‘customer requirements taking precedence over architectural integrity’, but many people (especially the customers) would prefer that to the converse.  Amusing analogies such as the possible negative consequences of living in an unplanned house that has been extended many times are sometimes used to encourage audiences to take a complete re-design approach to solving the problem (Marchand & Pepper, 2015).  By such an approach it is argued that customer service can be improved and complexity, duplication and risk reduced.  These are all highly laudable and valid aims, but how easy is it to design and implement a new IT architecture in a large mature organisation with an extensive IT systems estate?  Eg in a large bank with huge real-time transaction processing demands that has grown organically, and also grown by acquisition?   Rather than the unplanned house analogy, a better analogy might be a ship at sea involved in a battle.  
Imagine if you were the captain of such a ship and someone came onto the bridge to suggest that everyone stop taking action to evade the enemy and instead draw up a new design for the ship that would make evasion easier once implemented.  You might be forced to be uncharacteristically impolite for a moment before getting back to the job at hand. 

At some point, many large organisations have attempted the enterprise-wide re-design approach to resolving their legacy systems problems.  Many such initiatives are abandoned when the scale of the challenge or the impossibility of delivering against a moving target become clear.  Time has a nasty habit of refusing to stand still while you draw up your new blueprint.   Re-designing an entire architecture is not a trivial undertaking, and building / buying and implementing replacement systems will take a long time.  Long before a new architecture could ever be implemented the organisation will have launched new products and services; changed existing business processes; experienced changes to regulations; witnessed the birth of a disruptive technology; encountered new competitors; exited a particular business sector and entered others.  All of these things conspire to make your redesign invalid before it is live.  If you are lucky, you realise the futility of the approach before too much money has been spent.  Furthermore, the sort of major projects required to achieve the transformation are the sorts of projects that suffer notoriously high failure rates: “In just a twelve month period 49% of organizations had suffered a recent project failure” (KPMG, 2005); “Only 40% of projects met schedule, budget and quality goals” (IBM, 2008); “17% of large IT projects go so badly as to threaten the very existence of the company” (McKinsey and Company, 2012).

So if wholesale blueprinting and re-engineering is impractical, what can be done to solve the problems of legacy architectures?  The first thing to say is that trying to fix all of the problems at the same time is a logistical impossibility in anything but the smallest companies, and bears a high risk.  Many organisations would not have the resources to accommodate the large spike in project effort.  Problems always need to be tackled in priority order as there is rarely a silver bullet for the whole job.  Luckily there are some practical and cost effective approaches that can mitigate many of the problems with legacy systems while obviating the need to replace any of the systems.  Two of these approaches are service oriented architecture (SOA) and web services (Cabrera, Curt, & Box, 2004; Li, Huan, Yen & Chang, 2007; Mahmoud, 2005; Serrano et al, 2014). Used in combination, they offer an effective solution to the legacy systems problem.

SOA refers to an architectural pattern in which application components talk to each other via interfaces.  Rather than replacing multiple legacy systems it provides a messaging layer between components that allows them to co-operate to a level you would expect if everything had been designed at the same time and was running on much newer technologies.  These components not only include applications and databases, but can also be the different layers of applications.  Eg multiple presentation layers talk to SOA and SOA talks to multiple business logic layers – and thus an individual presentation layer that previously could not talk easily (if at all) to the business logic layer of another application can now do so.
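
The pattern can be sketched as a minimal message bus: components register named services and call one another through the bus rather than through point-to-point proprietary interfaces. The service names and payloads below are invented for illustration; a real SOA layer would route messages over a network rather than in-process:

```python
class MessageBus:
    """Minimal sketch of an SOA-style messaging layer: components expose
    named services and communicate only via the bus."""

    def __init__(self):
        self.services = {}

    def register(self, name, handler):
        self.services[name] = handler

    def call(self, name, payload):
        return self.services[name](payload)

bus = MessageBus()

# Two 'legacy' business-logic layers, never designed to talk to each other,
# exposed behind the same uniform interface.
bus.register("positions.get", lambda req: {"account": req["account"], "position": 120})
bus.register("prices.get", lambda req: {"symbol": req["symbol"], "price": 9.5})

# A new service composed from the two legacy ones without modifying either.
def portfolio_value(account, symbol):
    pos = bus.call("positions.get", {"account": account})
    px = bus.call("prices.get", {"symbol": symbol})
    return pos["position"] * px["price"]

print(portfolio_value("A1", "XYZ"))  # 1140.0
```

Note that neither legacy component knows about the other; the bus supplies the co-operation, which is precisely what lets existing systems stay in place.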

Web services aim to deliver everything over web protocols so that every service can talk to every other service using various types of web communications (WSDL, XML, SOAP, etc.).  Rather than relying on proprietary APIs to allow architectural components to communicate, SOA achieved through web services provides a truly open interoperable environment for co-operation between components.

The improvements that can be achieved in an existing legacy systems architecture using SOA through web services can be immense, and there is no need for major high risk replacement projects and significant re-engineering.  Instead organisations can focus on improving cost efficiency by removing duplication and redundancy through a process of continuous improvement, knowing that their major operations and support issues have been addressed by SOA and web services.  Another benefit is that the operations of the organisation can start to be viewed as a collection of components that can be configured quickly to provide new services even though the components were not built with the new service in mind.  This is the principle of the ‘composable enterprise’ (Murray, 2013).

But addressing the issue of legacy systems in a way that makes good sense is not just an IT issue; it is also a people issue.  It requires people to resist their natural inclination to get rid of old things and build new things in the mistaken assumption that new is always better than old.  It requires people to resist the temptation to launch ‘big deal projects’, for all of the reasons that people launch big deal projects – from a genuine belief that they are required (or the only way), to it being a means of self-promotion (and everything in between).  It requires people to take a genuinely objective view of the business case for change, while operating in a subjective environment.  It requires people to prioritise customer service over the compulsion to tidy up internally.  And it requires the default method of change to be continuous improvement rather than step-change projects – which can be counter-intuitive in cultures where many employees have the words ‘project’ or ‘programme’ in their job titles.

This is all easier said than done when you are dealing with people in a real-life organisation where certain skills and behaviours have been valued highly for years.  It is not an overnight job to get people to realise that it is those skills and behaviours that are contributing to their problems.  Resistance to change should be expected.  In fact, as long as resistance is overt it is a good thing, because at least people are engaging and opening themselves up to discussion and the possibility of learning (Moyce, 2015).  Getting to the point where legacy IT architecture issues can be handled in the best possible way will involve many of the common aspects of organisational change: education; developing new skills; adopting different mind-sets; using multiple rather than single methodologies; and basing the choice of method on the reality of the situation rather than on custom and practice.  
The popularity of agile methods means that continuous improvement using iterative rather than step-change approaches is in vogue again.  

To summarise, resolving the problems of legacy enterprise IT system architectures can provide significant gains in productivity, efficiency, agility, and customer satisfaction.  For that reason the endeavour should be a high priority.  However, there are many risks attached, and this type of work needs to be approached in a way that is highly mindful of those risks.  After all, the systems are business critical – not only to the organisations that own and operate them, but also to the businesses of their clients.  Luckily, we now have technical tools and approaches available to effect radical improvements without having to incur the expense, effort and risk of major replacement projects.  But using these tools requires a change of mindset and approach that may be counter-cultural in some organisations.  It can mean a move away from step-change and ‘long-march’ projects, and a move towards continuous improvement.  Education and engagement will be among the keys to making it happen. 

Cliff Moyce

13 October 2015


Cabrera, L.F., Kurt, C., and Box, D. (2004).  An introduction to the web services architecture and its specifications.  Last retrieved 30th June 2015 from 

IBM (2008).  Making change work.  Last retrieved 5th September, 2015 from

KPMG (2005).  Global IT project management survey.  Last retrieved 5th September, 2015 from

Li, S.H., Huang, S.M., Yen, D.C., and Chang, C.C. (2007).  Migrating legacy information systems to web services architecture.  Journal of Database Management, Oct-Dec 2007, 18, 4, 1-25.

Mahmoud, Q.H. (2005).  Service-Oriented Architecture (SOA) and Web Services: The Road to Enterprise Application Integration (EAI).

Marchand, D.A. and Pepper, J. (2015). Firms need a blueprint for building their IT systems.  Harvard Business Review (June 18, 2015).  Last retrieved 22 July 2015 from

Matei, C.M. (2012).  Modernization solution for legacy banking system: Using an open architecture.  Informatica Economica, 16, 2, 92-101.

McKinsey and Company in conjunction with the University of Oxford (2012).  Delivering large-scale IT projects on time, on budget, and on value.  Last retrieved 5th September 2015 from

Moyce, C.L. (2015).  Resistance is useful.  Management Services, 59, 2, 34-37.

Murray, J. (2013).  The composable enterprise.  Last retrieved 22nd July, 2015 from

Preimesberger, C. (2014).  Updating legacy IT systems while mitigating risks: 10 best practices.  Last retrieved 5th September, 2015 from 

Serrano, N., Hernantes, J., and Gallardo, G. (2014).  Service oriented architecture and legacy systems.  IEEE Software, 31, 5.

Souza, B. de (2015).  Enterprise architecture and the legacy conundrum.  CIO (13284045).  Last retrieved 16 July 2015 from

Cyber security: how can we turn the corner?

Cliff Moyce: 15 April 2016

Companies that manage data rely on customers being confident that their data (including sensitive / confidential / secret personal details) will be held safe and secure. If this backbone of trust is broken, those using their systems will simply stop doing so. This applies at both a corporate and a consumer level. The particular sensitivities and high level of personalisation and visibility that characterise many modern enterprises make privacy vital for businesses’ continued existence. Despite the importance of customer confidence in data security, there have been several high profile cyber security breaches in the past two years in which enormous amounts of sensitive data were stolen. Hundreds of other breaches have occurred in the same period; they just haven’t made the headlines (in some cases, deliberately so). Companies that have suffered losses of customer data include JP Morgan Chase, Talk Talk, Anthem, Ashley Madison, Patreon, and LastPass. Some of the problems suffered have been so severe as to threaten the future of the company.

In 2016 organisations will be keen to ensure they do not suffer the same problem, but how will they achieve that aim? One important step will be for organisations to forget the misconception that data losses are usually the result of technology weaknesses and failures. In fact, it is human failings that are far and away the most common cause of what the press often describes as ‘hacking’. Developing security policies to mitigate the people-risk in cyber security is no longer enough. In fact, it was never enough. Such policies risk being treated as tick-box exercises, or are created with good intent but are undermined by a culture of poor practice. Education and training in security policies are essential – but even that can fail if the necessary culture change does not happen. This is where the most important change needs to happen in 2016 to avoid repeating the mistakes of 2014 and 2015. All employees need to be trained and examined on best practice for cyber-security and data-protection.

One important area that is often overlooked is the risk of individuals falling victim to social engineering outside of the workplace. Their compromised status can then follow them into their organisations. It is vital that all staff understand how email attachments, phishing, and impersonations can be used to install malware on personal devices that are also used for work purposes. By this method, login credentials to their corporate network can be lost to ‘bad actors’. At JP Morgan Chase it was an employee’s personal desktop computer that was infected. When that individual logged in remotely to the corporate network via the company VPN in June 2014, the malware obtained access rights to the network. Human errors that had happened previously at JP Morgan (including forgetting to update security software on one server out of thousands) made it possible for the hackers to gain control of 90 servers and huge amounts of data, and to steal large amounts of money from JP Morgan clients.

If companies invest in the right training and education for their people, it will result in a renewed faith in data security. This would be a breath of fresh air for a world that is becoming increasingly wary of modern enterprises’ ways of working. One ray of hope is that many organisations are now establishing better security standards and looking for new ways to create more private and secure methods of communication and engagement. Hopefully the outcome will be that people will start to feel more confident in using the apps and services that have so much to offer in terms of personal productivity. But will these improvements represent a triumph for everyone? Sadly, no. The unfortunate loser of tighter security and greater awareness will be the advertising industry, though possibly only temporarily. For advertisers, new security standards will mean that they have to invest in less intrusive forms of advertising. Hopefully that will eventually work for them as well as their current methods do.

To finish on a cliché: every problem is also an opportunity. With knowledge will come greater online security, more educated users of technology, and (even) more sophisticated advertising!

This article was published originally at on 15/4/2016

Cliff Moyce