Dodd-Frank – an unintended assault on business

[Feb, 2017]

The Dodd–Frank Wall Street Reform and Consumer Protection Act of 2010 (‘Dodd–Frank’) is a United States federal law that was intended to reduce systemic risk in the banking sector, so that we would never again see highly leveraged balance sheet positions causing the failure of banks, and those failures causing contagion across other institutions and whole economies (as happened in 2008).

Unfortunately, Dodd-Frank became an unintended assault on business because the capital adequacy provisions in the Act caused banks to stop lending (as loans weakened balance sheets). Loans were called in or withheld; overdrafts, materials financing and factoring agreements were not renewed; and, as a result, rates of business failure exploded. In some regions, the refusal of banks to fund the growth (not the decline) of small and medium-sized enterprises has become the single biggest cause of business failure. The Act and much of the other regulation introduced since the financial crash also caused banks to divert most of their discretionary project budgets towards regulatory compliance initiatives – including meeting the needs of ever more demanding regulatory reporting.

Regardless of your personal and professional view on the value of increased regulatory reporting (Is anyone doing anything with the data?  What difference has it made? Has systemic risk actually reduced?), diverting funding away from projects that could improve products and services to customers should be a concern for everyone. It is in no-one’s interests for our banks to become moribund. 

Another highly ironic unintended consequence of the explosion in regulation in recent years (Dodd-Frank, the Basel accords, EMIR, MiFID, etc.) has been that it punishes small banks and financial institutions more than the big players. That is, the institutions that represent the biggest source of systemic risk (big banks) are affected less than those that present almost no systemic risk. This is because smaller institutions lack the resources (people, money, skills) to undertake the wholesale legacy system upgrades, system rationalisations and new systems developments needed to meet regulatory reporting and risk management requirements.

Review and reform of Dodd-Frank is very much required; otherwise our western banking system, business environments and economies will be stuck in a nosedive that could become a death spiral.

We should welcome a review of Dodd-Frank and the Volcker Rule (the rule within the Act intended to stop speculative trading and investments by banks), and of all other recent financial regulation. We will not see proprietary trading in banks returning to the extent that it puts the whole institution at risk (when it does come back – and it will come back – there will be severe ring-fencing of the assets at risk), but we should all want to see our businesses better financed with a wide range of financial products.

Cliff Moyce

NB this article was first published on 6 Feb 2017 in Financial IT https://financialit.net/blog/dodd-frank-unintended-assault-business

How Blockchain helps achieve more efficient regulatory compliance

[Dec 2016]

Could the speed, security, and immutability of blockchain help financial institutions achieve regulatory compliance in a more efficient manner?  There are good reasons to think so.

Blockchain technology has the potential to transform many business processes, making the data used in those processes more available, transparent, immediate and secure. It can also strip out large amounts of cost, delay, error handling, and rework.  Possible uses include trade reporting; clearing, confirmation, validation and settlement; record keeping; monitoring and surveillance; risk management; audit; management accounting; financial accounting; and, regulatory compliance including financial crime prevention. The immutability, immediacy and transparency of information captured within a blockchain means that all necessary data can be recorded in shared ledgers and made available in near real-time. In such a world, stakeholders will no longer be simple recipients of post-hoc reports; instead, they can be part of the real-time process.

By necessity, blockchain technology is complicated, but the underlying idea is simple: it is a distributed ledger or database running simultaneously on many nodes (possibly millions of them) that can be distributed geographically and across many organisations or individuals. What makes blockchain unique is its cryptographically assured immutability, or irreversibility. When transactions on the ledger are grouped into blocks and written to the database, they are accompanied by cryptographic verification, making it nearly impossible to fraudulently alter the state of the ledger.
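
To make the ‘cryptographically assured immutability’ idea concrete, here is a minimal sketch in Python using only the standard library. It is a toy illustration of hash-linked blocks, not a real blockchain: it omits the distributed consensus, peer-to-peer networking and mining/validation that production systems rely on, and the transaction fields are invented.

```python
# A minimal sketch (not a real blockchain) showing how cryptographic
# linking makes past entries tamper-evident.
import hashlib, json, time

def block_hash(contents):
    # Hash the canonical JSON form of the block's contents.
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "timestamp": time.time(),
             "transactions": transactions, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def verify(chain):
    # Any edit to an earlier block breaks the hash links that follow it.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
append_block(ledger, [{"trade_id": "T1", "qty": 100}])
append_block(ledger, [{"trade_id": "T2", "qty": 250}])
print(verify(ledger))                               # True
ledger[0]["transactions"][0]["qty"] = 999
print(verify(ledger))                               # False: tampering is detectable
```

Because each block’s hash covers its own contents and the previous block’s hash, changing any historical entry invalidates every block that follows it – which is what makes post-hoc tampering evident to all participants.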

Blockchain’s immutability lends itself to proof-of-process for compliance; eg keeping track of the steps required by regulation.  Recording actions and their outputs immutably in a blockchain would create an audit trail for regulators to verify compliance. Perhaps more importantly, regulators could have read-only and near real-time access into the private blockchain of financial organisations.  This would allow them to play a more proactive role and analyse information in real time, and even issue alerts and warnings automatically.  Such a change could reduce dramatically the time, effort and cost that financial institutions spend on regulatory reporting, as well as improve the quality, accuracy and confidence in the process.

A further possible extension is blockchain being used as a digital identity management grid, with all of the information required for screening and compliance being held about individuals and/or firms in a chain.  This would reduce KYC/AML (know your client / anti-money laundering) processes to simple automated checks of a blockchain-powered, market-wide utility.  
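
To make the ‘simple automated checks’ idea concrete, below is a small Python sketch assuming a hypothetical market-wide registry keyed by client identifier. The record layout, field names and pass/fail rules are invented for illustration; a real utility would define its own schema and attestations on the shared ledger.

```python
# Illustrative sketch only: a hypothetical market-wide KYC registry exposed
# as a shared ledger. The record layout is an assumption, not a real schema.
from datetime import date

kyc_registry = {
    # client_id -> latest verified KYC record (as it might appear on-chain)
    "LEI-529900ABCDEF12345678": {
        "identity_verified": True,
        "screened_sanctions": True,
        "documents_expire": date(2026, 3, 31),
    },
}

def automated_kyc_check(client_id, registry, today=None):
    """Return (passed, reason) for an onboarding check against the registry."""
    today = today or date.today()
    record = registry.get(client_id)
    if record is None:
        return False, "no KYC record on the shared ledger"
    if not (record["identity_verified"] and record["screened_sanctions"]):
        return False, "verification or sanctions screening incomplete"
    if record["documents_expire"] < today:
        return False, "supporting documents have expired"
    return True, "onboarding checks satisfied by existing registry entry"

print(automated_kyc_check("LEI-529900ABCDEF12345678", kyc_registry))
```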

It is likely that sharing sensitive information about customers between financial organizations will become the norm once trust is established in a blockchain-enabled ecosystem. For example, SWIFT has announced that its own KYC registry — which already includes more than 1,000 member banks — will be shared with trusted partners and customers in future. This is one of the early steps towards fully trusted digital identities – a target the industry has yet to achieve.

So what are the barriers to blockchain being adopted? Mainly they are privacy and performance issues. Using blockchain for trade reconciliation, settlement and the like would require sophisticated privacy controls and management of access to the information residing in the blockchain. Out of the box, private (permissioned) blockchains can provide two types of access control: read-only and read/write. Additionally, it is possible to introduce permissions to mine, receive or issue assets. Real-world applications in capital markets and other sectors, however, require more flexible and granular access management schemas: simply putting complete information about all transactions on a shared ledger open to anyone on the network is something no market participant could accept. Speed is also a possible issue for the immediate adoption of blockchain in high-frequency processes. Blockchain-enabled databases are significantly slower than conventional databases because the cryptographic component is very calculation intensive. But speed issues are always solved eventually in the tech world, so this may just be a waiting game in which we migrate processes to blockchain once the technology catches up.

In summary, Blockchain technology has the potential to revolutionise many business processes in financial services and capital markets, as well as regulatory processes, bringing great benefit to the regulatory compliance profession.

Cliff Moyce, December 2016

This article was first published on the blog of the Financial Crime Prevention Association at https://fcpablog.com/2016/12/2/cliff-moyce-how-blockchain-helps-achieve-more-efficient-regu/

Cloud as a solution to legacy system problems (part 2): six steps to achieve Cloud migration

[January 9th 2017]

Cloud computing has a significant role to play in financial services as a strategic solution for resolving the problems of legacy systems. Here’s how financial institutions should approach Cloud migration.

In my previous article (The new IT experience: Cloud as a solution to legacy system problems in financial services), I argued that Cloud computing not only makes sense in its own right, but also has an extra and significant role to play in financial services as a strategic solution for resolving the problems of legacy systems. The argument is that the process of migrating to Cloud will facilitate the large-scale rationalisation and decommissioning of the (largely home-grown) problem systems that bedevil financial institutions. The previous article explained the benefits that can accrue from migrating to Cloud; this article provides more information on how to undertake such a migration.

As stated above, the main benefit of Cloud migration for financial institutions is the opportunity to rationalise systems, applications, databases, data sources, etc., and thus eliminate large amounts of the redundancy usually seen in such environments. By doing so, IT-based business operations will become easier and cheaper to deliver and support, as well as more accurate, timely and secure. Data management will become easier and more reliable, while data quality can be assured in ways that are impossible with the current myriad of overlapping and duplicating (but never agreeing) data inputs and outputs. Not only do fewer systems and more accurate data make it easier and faster to make changes and fix problems, but Cloud computing is designed to facilitate truly agile development practices, with development and test servers being spun up quickly on demand (a process that can take months in other infrastructure models). Improved data management alone can be the difference between achieving and not achieving regulatory compliance.

My recommended steps for planning and effecting Cloud migration are: assess current systems; design to reduce vendor lock-in; manage processes and culture; create a business case for change; avoid over-planning; and do not migrate applications that are not Cloud-enabled.

  1. Assess current systems. Cloud migration should start with an audit of the entire systems infrastructure and a reassessment of applications and databases. An audit is a valuable opportunity to rethink the value and relevance of existing applications and decide which ones are worth modernising and reconfiguring; which ones should be replaced by new applications; and which are no longer relevant and should be retired. A major factor in rationalisation decisions is redundancy – the target should be to have only one software module for each required function (build once, use many), and only one ‘system’ providing any one business service (internally or externally). Compare this to the current model, which often sees dozens (perhaps hundreds) of systems doing the same thing for different business segments (how many reporting systems or risk systems does your organisation have?). Another factor on which to base rationalisation decisions is how easily (or not) an application can be enabled to work in the Cloud while providing the major benefits of the Cloud (scalability, availability, security, etc.). Cloud enablement should not be taken for granted: some applications will convert easily, while for others it will be more difficult and the business case for doing so will be weak or non-existent. (A simple sketch of how such an assessment might be recorded and scored follows this list.)
  2. Design to reduce vendor lock-in. Vendor lock-in is not in and of itself a bad thing (the vendors are providing great services), but it needs to be an active decision and not something into which you stumble unthinkingly. Avoiding lock-in, and thus retaining the possibility of moving relatively easily from one major Cloud provider to another, may make sense from a strategic perspective, but it can also reduce the benefits that can be realised from Cloud. There is certainly a case for going ‘all-in’ with a vendor and thus maximising the benefits. For example, event-driven AWS Lambda compute functions are not portable outside AWS, but they do deliver significant benefits within AWS. By comparison, using universal instead of proprietary technologies will afford you more flexibility in future. Eg implementing your stack on an open-source PaaS (platform as a service) such as Mesosphere will make your architecture more ‘portable’, consuming only the infrastructure from the Cloud provider.
  3. Manage processes and culture. Technology develops faster than culture and processes. Opportunities offered by Cloud computing technology can give organisations incredible benefits, but only to the extent that processes and culture are supportive of the new practices. Superfast speed of infrastructure provision in Cloud can still run up against a wall of bureaucratic processes if you let it. Changing mind-sets is a huge determinant of Cloud success, and is something that is too often underrated or even overlooked.
  4. Have a business case. Don’t make migration an end in itself. Never do anything without a clear and compelling business case. Benefits you should be looking to validate and quantify include reduced capital expenditure; reduced operating costs; resources freed up to work on higher-value activities; increased development velocity; faster deployments to production; improved ease of support; reduced operational failure rates; improved data management and quality; increased transparency (monitoring, surveillance, reporting); easier to achieve regulatory compliance, etc.
  5. Don’t over-plan a long-term migration strategy. We all know that no plan survives first contact with the enemy. Instead, set goals and start moving toward them in a pragmatic and agile fashion. Remember that the environment will be changing around you (and the world of financial services has changed enormously in the past nine years), so what is important today might be less important tomorrow. Waterfall planned approaches simply store up risk for later on. Better to fail early with fewer consequences than go for big bang approaches to infrastructure change.
  6. Don’t migrate applications that are not fully Cloud-enabled. This is a big one, as doing so merely replaces old problems with new ones. Continued availability of applications – one of the key reasons for migrating to Cloud – is a function of their Cloud readiness, and you won’t get it if the application is not enabled for Cloud. Simply having something sitting on Cloud infrastructure is not the same thing as it being Cloud-enabled.
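
Picking up the assessment described in step 1, here is a minimal sketch of how the audit output might be captured and turned into a first-pass disposition for each application. The attributes, thresholds and dispositions are illustrative assumptions, not a standard model; a real assessment would weigh many more factors (data sensitivity, licensing, interfaces, run cost, etc.).

```python
# Sketch of how the audit output from step 1 might be structured and scored.
# Attributes and thresholds are illustrative assumptions only.
apps = [
    {"name": "risk-report-eu",     "duplicates_of": "risk-report-global", "cloud_ready": False, "business_value": 2},
    {"name": "risk-report-global", "duplicates_of": None,                 "cloud_ready": True,  "business_value": 5},
    {"name": "legacy-fx-blotter",  "duplicates_of": None,                 "cloud_ready": False, "business_value": 4},
]

def disposition(app):
    # Retire duplicates; migrate Cloud-ready applications; otherwise weigh
    # the re-engineering effort against business value before deciding.
    if app["duplicates_of"]:
        return "retire (redundant with {})".format(app["duplicates_of"])
    if app["cloud_ready"]:
        return "migrate as-is"
    return "re-platform" if app["business_value"] >= 4 else "replace or retire"

for app in apps:
    print(app["name"], "->", disposition(app))
```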

Conclusion: Theories of evolution argue for two factors: random mutation and natural selection. The survival of organisations is also subject to these factors, except that they have to choose/engineer their own mutations while selection is done by clients and other stakeholders. In my opinion, Cloud is one of those mutations that in future will be demanded by stakeholders, from clients and shareholders to partners and regulators. As data protection, cyber security and cost-effectiveness become bigger and bigger factors in the environment, Cloud will move from being an option to being a business necessity. Knowing how to migrate to Cloud will become a differentiating factor in continued and future business success.

Cliff Moyce, January 2017.

This article was published first at Tabb Forum http://tabbforum.com/opinions/cloud-as-a-solution-to-legacy-system-problems-part-2-6-steps-to-cloud-migration 

Cloud as a solution to legacy system problems in financial services

[November, 2016]

Back in the days before centralised water and electricity, people had to dig their own wells and procure their own generators. As centralised infrastructure became available, it made sense to connect to the grid. Similarly, Cloud computing has now become that grid for business. It makes sense in its own right, but it has an extra and significant role to play in financial services and capital markets – that of strategic solution for resolving legacy systems problems.

The problems of legacy IT infrastructures in financial services and capital markets are many and manifest. On-premises, home-grown, self-managed infrastructures fail any modern objective measure of value for money, time and quality. They are expensive to operate; inflexible; opaque; hard and slow to support, enhance and test; insecure; difficult to scale; and contain high levels of redundancy and obsolescence.

When times were good, business divisions effectively (or actually) had their own IT divisions building their systems. Though incredibly inefficient from a corporate perspective (the corporation was often building the same systems over and over again from scratch), this “shadow IT” model allowed business units to respond quickly to opportunities and client needs (at least in theory). Since then, budget constraints have forced financial institutions to integrate previously independent systems into something that can be operated, distributed and secured centrally. This has created a whole new set of problems.

To try to fix or replace everything in the legacy systems infrastructure – and there have been many articles exhorting the industry to do just that (eg “by starting again with a clean slate”) – is to oversimplify the problem and to over-invest in the solution. The project is simply too big, expensive and risk-laden to contemplate. Further, it will not solve all the problems of building and running your own infrastructure, as it could simply replicate the model with ‘new legacy’. Yet there is a strong need to align IT services to modern business operations, as many institutions now face the need to replace obsolete hardware in an IT estate that has been starved of money since the financial crash of 2008.

Setting a strategic target of migrating infrastructure to the Cloud will force a large degree of rationalisation that might not otherwise be contemplated, thus reducing the problems of redundancy and support. It will also force adoption of an infrastructure that exemplifies best practice in all measures (flexibility, scalability, performance, security, sustainability, etc.) and will allow financial institutions to move onto more modern hardware, firmware, middleware, operating systems, databases and applications with little or no capital investment. Benefits of Cloud include:

  • Security. Although people tend to attribute security to physical possession, that is a common misconception; it can be compared to holding money under a mattress rather than in a bank. In reality, Cloud is much more secure than any other option for infrastructure management and service delivery. The average on-premises infrastructure is penetrated multiple times a day, whereas the average big-name Cloud provider may only have been penetrated a handful of times in its existence (they may claim it has never happened). Also, you cannot be as secure as you should be if any version of your firmware, middleware, operating system, databases, anti-virus, firewall, application software, etc., is not fully up to date (one high-profile, successful cyber-crime intrusion and financial theft in banking was enabled in part by a small group of servers having been overlooked for security software upgrades). Vendors and the open source community work hard to plug quickly any vulnerabilities uncovered in their products and services, but many of their customers are slow to implement essential upgrades (the worst I have seen is a 10-year delay, but there are bound to be worse examples). The chances of an on-premises installation ticking all of the current-version boxes are close to zero: financial institutions just do not have the resources for monitoring, planning and implementing new versions. The chances of a top Cloud vendor ticking the boxes are far higher – it is what they do for a living.
  • Cost. A big issue for enterprises is the high cost of running and maintaining IT infrastructure. A bank can spend up to 50% of its budget running IT-based business operations. Cloud computing offers near real-time, on-demand, subscription-based provisioning of almost infinite compute, storage and network resources, with the ability to scale up and down automatically, intelligently and in a matter of seconds. This provides the opportunity for huge increases in efficiency and productivity (a short provisioning sketch follows this list).
  • Availability. The reality of running on-premises data centres is that availability and recoverability are never guaranteed. Provided that applications are Cloud-enabled, continuous availability and disaster recovery are a function of the design and infinite horizontal scalability of the Cloud.
  • Speed of deployment. Applications designed for the Cloud and using Cloud services take significantly less time to build and deploy and are cheaper to run.
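
As a small illustration of the on-demand provisioning mentioned under Cost, the sketch below uses the AWS SDK for Python (boto3). The AMI ID, region and instance type are placeholders, and working credentials and account setup are assumed; the point is simply that compute capacity becomes an API call rather than a procurement cycle.

```python
# Sketch of on-demand provisioning using the AWS SDK for Python (boto3).
# The AMI ID below is a placeholder; credentials and account setup are assumed.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Capacity arrives in seconds via an API call rather than a procurement cycle.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)

# ...and is released just as quickly when no longer needed, so you stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])
```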

The benefits of Cloud listed above allow organisations to advance toward a more flexible, agile and data-driven model in several ways:

  • empowering agile approaches to development. By enabling self-service infrastructure acquisition, provisioning and deployment, as well as elasticity and scalability, Cloud computing encourages innovation and experimentation and speeds up continuous integration and delivery, making genuinely agile approaches to product, service or software development possible.
  • enabling data management and business intelligence. One of the bases of the digital economy – the availability of data and the ability to process data – is enabled and reinforced in the Cloud, which offers access to tools and compute power to process and consolidate Big Data and prepare them for specific tasks. Migrating data sources and data pipelines to the Cloud gives the technology team sufficient data and infrastructure elasticity to run predictive analytics and gather data-driven business intelligence.
  • improving the speed and quality of decision making.  Cloud plays a vital role in enabling faster and more informed decision making by providing broad and immediate access to data irrespective of their location, thereby reducing interdependencies between ‘information holders’. A strategy of migrating legacy infrastructure to the Cloud should have ubiquitous, transparent access to operational, process and customer data as an important objective. Current forced integrations of multiple systems (many doing a version of the same thing in an inconsistent manner) commonly create information silos.
  • freeing up resources and improving flexibility. Cloud can make an organisation ‘lighter’ and more flexible, as it allows a move from systems to services. This is especially relevant in modern heterogeneous computing environments with multi-tiered applications requiring a broad mix of technologies. Creating a ‘composable enterprise’ of software modules can become a reality if made an objective of migrating to Cloud. Resources that were previously invested in running and maintaining outdated technology can be re-directed to innovation in serving customers.

The above points are illustrated well by the global insurance group Ageas. The company adopted a Cloud-based enterprise platform that integrates the full range of back-end and front-end processes, from policy administration to claims and from finance to HR. As a result, the company’s processes are streamlined, operations agile and enterprise analytics easily accessed.

Conclusion

The financial services industry has available to it a strategic solution for large enterprise legacy IT architectures that if implemented correctly can free institutions from the limitations and implications of outdated technology. Cloud computing is much, much more than simply outsourcing the operation of your IT systems. It is a paradigm shift in how we think about and experience IT in our organisations. The benefits will be seen by customers, business users, IT developers, financial officers, operational executives, shareholders, regulators and many other stakeholders. Cloud is the future. Never forget: Every cloud has a silver lining!

Cliff Moyce, November 2016

This article was published first on Tabb Forum (www.tabbforum.com)

Cognitive analytics gives business the edge

Monday, 10 October 2016

The cognitive analytics revolution in business is underway. It is underpinned by artificial intelligence, cognitive computing and machine learning. Cognitive analytics will give business executives such as the CEO, CFO, CIO and CMO massively enhanced data-driven decision-making abilities, as well as the ability to track and learn from prior decisions. The change means that decisions can be informed by non-intuitive insights on products, services, business operations and markets (including client behaviours) drawn from a wide variety of sources. Those sources will include unstructured data such as social media posts, images, and academic documents. We have seen already how the ability to do post-hoc analyses of the economic, political and legal decisions of governments and legislatures can generate non-intuitive insights unavailable through traditional methods; now it is time for the boardroom to be doing the same.

It almost goes without saying that the use of the word ‘cognitive’ implies the continued quest in computing to create intelligent business machines that operate as per the human brain, “by reverse engineering the computational function of the brain” (Modha, D.S., 2011). Combining neural models and technologies with huge processing power can take us well beyond what any of us could achieve alone or in teams, even huge teams, with current analysis tools and techniques.

The way that cognitive analytics achieves its magic over and above current data analysis methods is through (1) its ability to analyse huge amounts of unstructured data alongside traditional structured data sets; (2) its ability to generate non-intuitive insights from data; and (3) its ability to learn as it works – including learning how decisions it suggested previously panned out when implemented (post-hoc analyses). Unstructured data that are handled well by cognitive analytics tools include emails, videos, documents, images, social media posts, academic articles, etc. Cognitive computing uses natural language processing, probabilistic reasoning, machine learning and other technologies and techniques to analyse content efficiently; analyse context; and find near real-time insights and answers hidden within massive amounts of information. Cognitive systems can adapt and get smarter over time by learning through their interactions with data and through human decision-making (including decisions suggested by the same cognitive systems). Insights provided through cognitive analytics will focus us more on the questions that we ask. These insights can help break us free from the prisons of wrong assumptions, faulty hypotheses, and the tendency to confuse symptoms with causes.
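
As a deliberately small illustration of one ingredient described above – turning unstructured text into features a model can learn from – here is a sketch in Python using scikit-learn. The example texts and labels are invented, and a real cognitive analytics platform goes far beyond a single bag-of-words classifier (entity extraction, context, feedback loops), but the same principle of learning from unstructured sources applies.

```python
# Tiny illustration: learning from unstructured text (e.g. social media posts
# tagged by analysts). Data and labels are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["love the new mobile app, so fast",
         "branch queue was terrible again today",
         "great rates and helpful staff",
         "app keeps crashing, switching banks"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["the new app is brilliant"]))   # e.g. ['positive']
```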

All areas of business can be supported and enhanced by cognitive analytics. These include business strategy (for example, mergers and acquisitions); product design and marketing; financial planning (from capital planning to cash management to financial control); and business operations (eg the efficient and effective deployment of resources for maximum productivity).

Financial services and capital markets have been using a form of algorithmic artificial intelligence for some time: for example, algorithmic trading methods that use machine learning and ‘cognitive’ (ie loosely coupled) logic to make decisions, and predictive, trend, risk and behavioural analyses using similar methods for financial crime prevention. Those algorithmic cognitive or quasi-cognitive approaches are also seen in wealth management ‘robo-advisory’ offerings, and will start to be seen more generally in digital banking. In the finance function we have forecasting systems that use online analytical processing (OLAP), and we also see algorithmic predictive analyses in cashflow forecasting and demand planning.

What ‘real’ (ie based on neural models) cognitive analytics will give finance and business planning functions is the ability to use many data types that cannot currently be analysed easily; further and better analyses of the huge amounts of data held by the function; and the ability to derive non-intuitive insights that are not being derived currently. This step-change in capability will strengthen the ability of those functions to add value to strategic and operational planning. In financial control, for example, cognitive analytics can proactively highlight problems or areas for optimisation (see the sketch below). It can also track in real time, or monitor retrospectively, actual performance against financial plans, and provide feedback that companies can use to fine-tune their planning approaches. In fact, if a toolset is genuinely cognitive it should learn to fine-tune those approaches itself. Similarly, the ability of marketing and product development teams to better predict consumer behaviours will reduce the risk of product failure as well as driving innovation that may not have occurred otherwise.
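
Here is one possible shape of the ‘proactively highlight problems’ idea in financial control: unsupervised anomaly detection over expense postings, sketched with scikit-learn. The figures are invented for illustration, and flagged items would still need a human controller’s judgement; genuinely cognitive tooling would also learn from how those reviews turn out.

```python
# Sketch of proactive exception-spotting in financial control:
# unsupervised anomaly detection over expense postings. Figures are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Typical monthly postings for a cost centre: (amount, day-of-month)
normal = np.column_stack([rng.normal(10_000, 1_500, 200), rng.integers(1, 29, 200)])
suspicious = np.array([[95_000, 30], [62_000, 31]])    # unusually large, end of period
postings = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(postings)
flags = detector.predict(postings)                     # -1 marks likely anomalies
print(postings[flags == -1])                           # items for a controller to review
```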

In summary, cognitive analytics is set to transform our ability to plan, develop and run businesses. It is genuinely transformational. Though it is not a panacea for all ills, it will help enormously with diagnosing those ills. Early adopters will be well rewarded.

Cliff Moyce

[first published at ftseglobalmarkets.com on 10 October 2016]


How Blockchain Can Revolutionize Regulatory Compliance

[August 2016]

Blockchain is currently one of the hottest topics in financial services and capital markets. The technology has the potential to transform many business processes, making the data used in those processes more available, transparent, immediate and secure.  It could also strip out large amounts of cost, delay and error handling/rework.  Possible use cases include trade reporting; clearing, confirmation, validation and settlement; recordkeeping; monitoring and surveillance; risk management; audit; management and financial accounting; and regulatory compliance (including – but by no means limited to – financial crime prevention). The immutability, immediacy and transparency of information captured within a blockchain means that all necessary data can be recorded in shared ledgers and made available in near real time.  In such a world, stakeholders will no longer be simple recipients of post-hoc reports; instead they can be part of the real-time process.

Blockchain first emerged as the technology that powers the cryptocurrency bitcoin. However, since its first appearance in 2009, blockchain’s potential uses have far exceeded cryptocurrency applications. By necessity, blockchain technology is complicated in its implementation, but the underlying idea is simple: it is a distributed ledger or database running simultaneously on many nodes (possibly millions of them) that can be distributed geographically and across many organizations or individuals. What makes blockchain unique is its cryptographically assured immutability, or irreversibility. For example, when transactions on the ledger are grouped into blocks and written to the database, they are accompanied by cryptographic verification, making it nearly impossible to fraudulently alter the state of the ledger. Another way to think about blockchain is as trust/consensus technology: changes in the data are recorded into the blockchain when network participants agree that a transaction is legitimate in accordance with shared protocols and rules.

Interest in blockchain in financial services and capital markets continues to grow – and will accelerate as live solutions make their way to market.  Many organizations – including banks, exchanges and fintech firms – have announced initiatives in 2016, while the list of possible use cases being proposed in articles and forums is lengthening.

Applications in Compliance

One of the most exciting features of blockchain from the compliance perspective is its practical immutability: as soon as data is saved into the chain, it cannot be changed or deleted. That is why blockchain is used as the document or proof for the transfer of any digital asset, for example bitcoins or other digital currencies. By the same token, it can be used as a record of ownership of physical property – an approach currently being tested by Sweden’s national land survey, where a blockchain-powered system for registering and recording land titles is attempting to digitize real estate processes.

Blockchain’s immutability also lends itself to the application of proof-of-process for compliance. Blockchain could be used to keep track of the steps required by regulation, and recording actions and their outputs immutably in a blockchain would create an audit trail for regulators to verify compliance. Almost as importantly, regulators could have read-only, near real-time access into the private blockchains of financial organizations. This would allow them to play a more proactive role and analyze information in real time – in other words, it brings them closer to becoming participants in, rather than customers of, the process. Such a change could dramatically reduce the time and effort (and therefore cost) that financial institutions spend on regulatory reporting, as well as improving the quality and accuracy of the process and confidence in it.
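
As a rough illustration of the proof-of-process idea, the sketch below records compliance steps as hash-linked entries in an append-only log that a reviewer with read-only access could verify. It is a plain-Python toy using only the standard library; the step names and fields are invented, and a real deployment would sit on a shared, permissioned ledger rather than an in-memory list.

```python
# Illustrative only: compliance steps appended to a tamper-evident log that a
# regulator with read-only access could verify. Step names/fields are assumptions.
import hashlib, json
from datetime import datetime, timezone

audit_log = []

def record_step(process_id, step, outcome):
    prev_hash = audit_log[-1]["hash"] if audit_log else "genesis"
    entry = {"process_id": process_id, "step": step, "outcome": outcome,
             "recorded_at": datetime.now(timezone.utc).isoformat(),
             "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    audit_log.append(entry)

record_step("onboarding-42", "identity_documents_collected", "complete")
record_step("onboarding-42", "sanctions_screening", "no match")
record_step("onboarding-42", "approval", "granted")

# A read-only reviewer can confirm every required step is present and in order.
print([e["step"] for e in audit_log if e["process_id"] == "onboarding-42"])
```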

Another regulatory field where blockchain could play an important role is KYC (know your customer) and AML (anti-money laundering). Banks and other financial institutions have to complete many tasks and steps as part of the onboarding process for new clients. In addition to data collection, there are important rules around validation, confirmation and verification to be satisfied before new clients can be onboarded. In some markets, the process can take several months. Many of the steps could be eliminated if the information already existed in a secure, tamper-resistant database – an immutable blockchain. Any changes to customer data would be distributed to participants in the blockchain immediately, and the chain would provide records of procedures and compliance activities for each client. Blockchain would play the role of proof-of-process, so that all steps are easily traceable and regulators can be confident about the veracity of the information. Moreover, individuals would be co-custodians of the information on the blockchain, which could provide additional protection against identity theft (impacting or even disintermediating businesses like credit-monitoring services).

A further possible extension is blockchain as a digital identity management grid, with all information required for screening and compliance being held about individuals and/or firms in a chain.  This would reduce KYC/AML processes to simple automated checks of a blockchain-powered, marketwide utility.  It is likely that sharing sensitive information about customers between financial organizations will start to become the norm once trust is established in a blockchain-enabled ecosystem.  Interestingly, SWIFT has announced that their own KYC registry, which already includes more than 1,000 member banks, will be shared with trusted partners and customers in the future.  This is one of the early steps to fully trusted digital identities in the industry – which must be the target business and legal outcome.

Smart Contracts

It is hard to explore potential applications of blockchain without mentioning smart contracts. In short, smart contracts are custom, self-executing programs (distributed applications) that run on a blockchain and are triggered by some external data or event; if certain conditions are met, a smart contract can update the blockchain according to predefined rules (e.g., transfer digital assets from one participant to another). Once this technology gathers enough momentum, its proponents believe smart contracts will be no less revolutionary than the invention of HTML, which transformed the internet and, subsequently, the entire world economy. The appeal of smart contracts is undeniable, as they could potentially replace many functions currently executed by costly or inefficient intermediaries. However, smart contract technology clearly isn’t ready for prime time yet, as evidenced by the recent much-publicized DAO debacle, where a poorly formulated contract allowed a savvy user of Ethereum, a popular public blockchain, to obtain millions of dollars’ worth of digital currency. Smart contracts need to become much more robust to reach the comfort level necessary for widespread adoption by industry.
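
For readers unfamiliar with the pattern, here is a plain-Python sketch of the trigger/condition/state-update logic described above. Real smart contracts are deployed on-chain and written in languages such as Solidity; the balances, event fields and amounts below are invented purely to show the shape of the idea.

```python
# Plain-Python sketch of the smart-contract pattern: an external event triggers
# a check of conditions, and shared state is updated only if they are met.
balances = {"buyer": 1_000, "seller": 0}          # shared ledger state (illustrative)

def settlement_contract(state, event):
    """Transfer the agreed amount only if the event confirms delivery and funds suffice."""
    if event.get("delivery_confirmed") and state["buyer"] >= event["amount"]:
        state["buyer"] -= event["amount"]
        state["seller"] += event["amount"]
        return "settled"
    return "no action"

print(settlement_contract(balances, {"delivery_confirmed": True, "amount": 400}))
print(balances)   # {'buyer': 600, 'seller': 400}
```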

The smart contracts issue reminds us that, for all its promise, blockchain is still quite experimental and not without its challenges with regard to the use cases being discussed in the industry. Some of the barriers to adoption that come to mind are privacy, performance and infrastructure. Using blockchain for trade reconciliation, settlement and the like would require sophisticated privacy controls and the management of access to the information residing in the blockchain. Originally, blockchain was designed for precisely the opposite – namely, to enable every network participant to view the entirety of the data. With Bitcoin, for example, anyone can view the entire ledger if they want to. Out of the box, private (permissioned) blockchains can provide two types of access control: read-only and read/write. Additionally, it is possible to introduce permissions to mine, receive or issue assets. However, real-world applications in capital markets and other sectors require more flexible and granular access management schemas; simply putting complete information about all transactions on a shared ledger open to anyone on the network is obviously something no market participant would agree to. In a perfect world, blockchain would allow enterprises to map their existing LDAP (Lightweight Directory Access Protocol) users and groups onto it. This is a non-trivial problem that remains unsolved at this time, to the best of our knowledge.
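
To show why coarse read-only/read-write controls fall short, here is a small sketch of the kind of per-field, per-role visibility market participants would actually need. The roles, fields and permissions are assumptions for illustration only, not a description of any existing permissioned-ledger product.

```python
# Why coarse read-only / read-write permissions are not enough: participants
# need per-field, per-role visibility. Roles and fields are illustrative assumptions.
PERMISSIONS = {
    "regulator":    {"read_fields": {"price", "quantity", "counterparties", "timestamp"}},
    "counterparty": {"read_fields": {"price", "quantity", "timestamp"}},
    "other_member": {"read_fields": {"timestamp"}},   # sees activity, not terms
}

def visible_view(transaction, role):
    allowed = PERMISSIONS.get(role, {}).get("read_fields", set())
    return {k: v for k, v in transaction.items() if k in allowed}

trade = {"price": 101.25, "quantity": 5_000,
         "counterparties": ["Bank A", "Bank B"], "timestamp": "2016-08-01T10:15:00Z"}

print(visible_view(trade, "regulator"))
print(visible_view(trade, "other_member"))
```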

Challenges

Speed is often cited as a big problem for the wider adoption of blockchain. Performance of blockchains is significantly slower than that of conventional databases, and with good reason: the cryptographic component, which is what gives blockchain its most attractive features, is very calculation-intensive. For example, the throughput capacity of bitcoin is only around seven transactions per second. This does not compare well to the average of 2,000 transactions per second processed by the VISA payment system, with a peak capacity of 56,000 transactions per second (although VISA never actually uses more than about a third of this, even during peak shopping periods). There are attempts being made to build blockchains capable of higher performance. Most notably, BitShares claims the ability to handle up to 100,000 transactions per second, which would be plenty fast enough if this were an apples-to-apples comparison. However, the definitions of performance used by BitShares in their publicized explanations seem different from the accepted norm. These comparisons are further complicated by factors like collocation and the distributed nature of blockchains, but in the grand scheme of things, for now the performance gap remains unbridged.
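
A back-of-envelope calculation using the figures quoted above shows the scale of the gap:

```python
# Back-of-envelope comparison using the throughput figures quoted above.
visa_average_tps = 2_000     # transactions per second (average)
bitcoin_tps = 7              # transactions per second

one_hour_of_visa_volume = visa_average_tps * 3_600           # 7,200,000 transactions
days_at_bitcoin_rate = one_hour_of_visa_volume / bitcoin_tps / 86_400

print(f"{one_hour_of_visa_volume:,} transactions")
print(f"~{days_at_bitcoin_rate:.1f} days to clear at bitcoin's rate")   # ~11.9 days
```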

Setting up and managing the infrastructure to support blockchain solutions is another challenge to organizations experimenting with the technology. As information security, operations, cloud and other teams start introducing blockchain as a new data/code layer in their firms, the process can be quite disruptive, in particular because there are no best practices available that would streamline the roll-out process. There are early attempts to improve the situation, like Microsoft’s Project Bletchley or Hyperledger, but they are not yet finalized for production use.

In summary, blockchain technology has the potential to revolutionize and improve many business processes in financial services and capital markets. Of the many processes that could be improved by the technology, it is regulatory processes such as KYC and financial crime prevention (e.g., AML) that may be early converts.  If this turns out to be the case, the benefits to the industry will be enormous.

Cliff Moyce

This article first appeared in Corporate Compliance Insights, the global premier news site for compliance, ethics, audit and risk:

Service oriented architectures and web services as a solution to legacy IT problems

Cliff Moyce, October, 2015.

When computing became ubiquitous in administrative environments in the late 1980s and early 1990s it was welcomed as an opportunity to improve the efficiency and effectiveness of business processing. Manual or semi-manual business processing at that time was noted for its inefficient hand-offs, checking and duplicated effort, as well as its storage problems. And yet 30 years later we look at extant (‘legacy’) IT systems architectures as representing the biggest barrier to productivity in some types of organisation. For example, large banks now spend nearly 50% of their operating budgets on IT – and yet it is IT configured in ways that would horrify any student of process design. Eg multiple systems (sometimes meaning twenty or thirty, not just two or three) doing the same thing; forced ‘integration’ between systems requiring software, middleware and hardware that should never have been required in the first place; inconsistencies between systems meaning reports have to take an aggregate of all outputs rather than relying on a golden source, etc. Attempts to rationalise the architecture by building a single new system to replace multiple old systems often result in yet another system being added to the pile. Support costs are high as people struggle to manage and resolve the complexity, risk and issues.

What to do about these problems is a long-running debate (eg de Souza, 2015; Preimesberger, 2014; Matei, 2012). One approach that is often espoused is to design and implement a new, more modern architecture using a radical clean-slate / blueprint style approach (eg Marchand & Pepper, 2015). While recognising the temptation to start again, this article asserts that big-bang approaches to legacy IT systems replacement can be naive, expensive and fraught with risk. Instead, pragmatic approaches that can deliver improvements using what exists currently are preferred and recommended. As well as discussing technologies that can enable such approaches, this article considers the cultural and organisational implications of adopting these methods.

The debate on legacy systems in some organisations is intensifying as expectations for cost efficiency, flexibility, and usability increase.  Legacy architectures are typically described in articles and presentations as unplanned; complex; poorly understood; slow and expensive to operate, support and enhance; old fashioned in their interfaces and reporting capabilities; hiding redundancy; difficult to monitor, control and recover; susceptible to security problems; and, hard to integrate with newer models and technologies such as cloud computing and mobile devices:  “Even minor changes to processes can involve rework in multiple IT systems that were originally designed as application silos” (Serrano, Hernates & Gallardo 2014).  Getting old and new applications, systems and data sources to work seamlessly can be difficult, verging on impossible.  This lack of agility means that legacy systems in their existing configuration can be barriers to improved customer service, satisfaction and retention.  In regulated sectors they can also be a barrier to achieving statutory compliance.  Pressure to replace these systems can be intensified by new competitors who are able to deploy more modern technologies from day one.

Explanations for problems associated with legacy architectures include excessive complexity arising from a post-hoc need to integrate systems that were originally designed to be autonomous; poor knowledge of systems due to lack of documentation and loss of original development teams; individual applications growing ‘like Topsy’ as new functions and modules are bolted on to meet customer demand; use of technologies, models and paradigms that are now outdated; duplication arising from multiple systems doing the same thing, etc.  ‘Local initiatives’ are sometimes argued to be partly to blame for the situation (eg Marchand & Pepper, 2015) as business lines or functions commission their own system builds or buy package implementations, perhaps with little regard to integration and support issues.  Many of these explanations for the problem could be summarised as ‘customer requirements taking precedence over architectural integrity’, but many people (especially the customers) would prefer that to the converse.  Amusing analogies such as the possible negative consequences of living in an unplanned house that has been extended many times are sometimes used to encourage audiences to take a complete re-design approach to solving the problem (Marchand & Pepper, 2015).  By such an approach it is argued that customer service can be improved and complexity, duplication and risk reduced.  These are all highly laudable and valid aims, but how easy is it to design and implement a new IT architecture in a large mature organisation with an extensive IT systems estate?  Eg in a large bank with huge real-time transaction processing demands that has grown organically, and also grown by acquisition?   Rather than the unplanned house analogy, a better analogy might be a ship at sea involved in a battle.  Imagine if you were the captain of such a ship and someone came onto the bridge to suggest that everyone stop taking action to evade the enemy and instead draw up a new design for the ship that would make evasion easier once implemented.  You might be forced to be uncharacteristically impolite for a moment before getting back to the job at hand. 

At some point, many large organisations have attempted the enterprise-wide re-design approach to resolving their legacy systems problems. Many such initiatives are abandoned when the scale of the challenge or the impossibility of delivering against a moving target becomes clear. Time has a nasty habit of refusing to stand still while you draw up your new blueprint. Re-designing an entire architecture is not a trivial undertaking, and building / buying and implementing replacement systems will take a long time. Long before a new architecture could ever be implemented, the organisation will have launched new products and services; changed existing business processes; experienced changes to regulations; witnessed the birth of a disruptive technology; encountered new competitors; exited one business sector and entered others. All of these things conspire to make your redesign invalid before it is live. If you are lucky, you realise the futility of the approach before too much money has been spent. Furthermore, the major projects required to achieve the transformation are the sorts of projects that suffer notoriously high failure rates: “In just a twelve month period 49% of organizations had suffered a recent project failure” (KPMG, 2005); “Only 40% of projects met schedule, budget and quality goals” (IBM, 2008); “17% of large IT projects go so badly as to threaten the very existence of the company” (McKinsey and Company, 2012).

So if wholesale blueprinting and re-engineering is impractical, what can be done to solve the problems of legacy architectures?  The first thing to say is that trying to fix all of the problems at the same time is a logistical impossibility in anything but the smallest companies, and carries a high risk. Many organisations would not have the resources to accommodate the large spike in project effort. Problems always need to be tackled in priority order, as there is rarely a silver bullet for the whole job. Luckily there are some practical and cost-effective approaches that can mitigate many of the problems with legacy systems while obviating the need to replace any of the systems. Two of these approaches are service oriented architecture (SOA) and web services (Cabrera, Kurt, & Box, 2004; Li, Huang, Yen & Chang, 2007; Mahmoud, 2005; Serrano et al, 2014). Used in combination, they offer an effective solution to the legacy systems problem.

SOA refers to an architectural pattern in which application components talk to each other via interfaces. Rather than replacing multiple legacy systems, it provides a messaging layer between components that allows them to co-operate to a level you would expect if everything had been designed at the same time and was running on much newer technologies. These components not only include applications and databases, but can also be the different layers of applications. Eg multiple presentation layers talk to SOA and SOA talks to multiple business logic layers – and thus an individual presentation layer that previously could not talk easily (if at all) to the business logic layer of another application can now do so.

The web services approach aims to deliver everything over web protocols so that every service can talk to every other service using various types of web communication (WSDL, XML, SOAP, etc.). Rather than relying on proprietary APIs to allow architectural components to communicate, SOA achieved through web services provides a truly open, interoperable environment for co-operation between components.
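
As a minimal sketch of the wrapping idea, the Python/Flask example below exposes a stand-in legacy routine behind an open HTTP/JSON interface so that other components can call it without any proprietary API. The endpoint, function and data are hypothetical; in practice the same pattern applies whether the service speaks REST/JSON or SOAP/WSDL as described above.

```python
# Minimal sketch: expose an existing legacy routine behind an open web interface
# so any other component can call it over HTTP/JSON. `legacy_position_lookup`
# is a hypothetical stand-in for a call into a real legacy module.
from flask import Flask, jsonify

app = Flask(__name__)

def legacy_position_lookup(account_id):
    # Placeholder for a call into an existing legacy system or library.
    return {"account_id": account_id, "position": 12_500, "currency": "USD"}

@app.route("/positions/<account_id>", methods=["GET"])
def get_position(account_id):
    # Other applications consume this service without touching the legacy API.
    return jsonify(legacy_position_lookup(account_id))

if __name__ == "__main__":
    app.run(port=8080)
```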

The improvements that can be achieved in an existing legacy systems architecture using SOA through web services can be immense, and there is no need for major, high-risk replacement projects and significant re-engineering. Instead, organisations can focus on improving cost efficiency by removing duplication and redundancy through a process of continuous improvement, knowing that their major operations and support issues have been addressed by SOA and web services. Another benefit is that the operations of the organisation can start to be viewed as a collection of components that can be configured quickly to provide new services, even though the components were not built with the new service in mind. This is the principle of the ‘composable enterprise’ (Murray, 2013).

But addressing the issue of legacy systems in a way that makes good sense is not just an IT issue; it is also a people issue. It requires people to resist their natural inclination to get rid of old things and build new things in the mistaken assumption that new is always better than old. It requires people to resist the temptation to launch ‘big deal projects’, for all of the reasons that people launch big deal projects – from a genuine belief that they are required (or the only way), to it being a means of self-promotion (and everything in between). It requires people to take a genuinely objective view of the business case for change while operating in a subjective environment. It requires people to prioritise customer service over the compulsion to tidy up internally. And it requires the default method of change to be continuous improvement rather than step-change projects – which can be counter-intuitive in cultures where many employees have the words ‘project’ or ‘programme’ in their job titles.

This is all easier said than done when you are dealing with people in a real-life organisation where certain skills and behaviours have been valued highly for years. It is not an overnight job to get people to realise that it is those skills and behaviours that are contributing to their problems. Resistance to change should be expected. In fact, as long as resistance is overt it is a good thing, because at least people are engaging and opening themselves up to discussion and the possibility of learning (Moyce, 2015). Getting to the point where legacy IT architecture issues can be handled in the best possible way will involve many of the common aspects of organisational change – education; developing new skills; adopting different mind-sets; using multiple rather than single methodologies; and basing the choice of method on the reality of the situation rather than on custom and practice. The popularity of agile methods means that continuous improvement using iterative rather than step-change approaches is in vogue again.

To summarise, resolving the problems of legacy enterprise IT system architectures can provide significant gains in productivity, efficiency, agility and customer satisfaction. For that reason the endeavour should be a high priority. However, there are many risks attached, and this type of work needs to be approached in a way that is highly mindful of those risks. After all, the systems are business critical – not only to the organisations that own and operate them, but also to the businesses of their clients. Luckily, we now have technical tools and approaches available to effect radical improvements without having to incur the expense, effort and risk of major replacement projects. But using these tools requires a change of mindset and approach that may be counter-cultural in some organisations. It can mean a move away from step-change and ‘long-march’ projects, and a move towards continuous improvement. Education and engagement will be among the keys to making it happen.

Cliff Moyce

13 October 2015

References

Cabrera, L.F., Kurt, C., and Box, D. (2004).  An introduction to the web services architecture and its specifications.  Last retrieved 30th June 2015 from https://msdn.microsoft.com/en-us/library/ms996441(d=printer).aspx 

IBM (2008).  Making change work.  Last retrieved 5th September, 2015 from http://www-935.ibm.com/services/us/gbs/bus/pdf/gbe03100-usen-03-making-change-work.pdf

KPMG (2005).  Global IT project management survey.  Last retrieved 5th September, 2015 from http://www.kpmg.com.au/Portals/0/irmpmqa-global-it-pm-survey2005.pdf

Li, S.H., Huang, S.M., Yen, D.C., and Chang, C.C. (2007).  Migrating legacy information systems to web services architecture.  Journal of Database Management, 18, 4, 1-25.

Mahmoud, Q.H. (2005).  Service-Oriented Architecture (SOA) and Web Services: The Road to Enterprise Application Integration (EAI).  http://www.oracle.com/technetwork/articles/javase/soa-142870.html

Marchand, D.A. and Pepper, J. (2015). Firms need a blueprint for building their IT systems.  Harvard Business Review (June 18, 2015).  Last retrieved 22 July 2015 from https://hbr.org/2015/06/firms-need-a-blueprint-for-building-their-it-systems

Matei, C.M. (2012).  Modernization solution for legacy banking system: Using an open architecture.  Informatica Economica, 16, 2, 92-101.

McKinsey and Company in conjunction with the University of Oxford (2012).  Delivering large-scale IT projects on time, on budget, and on value.  Last retrieved 5th September 2015 from http://www.mckinsey.com/insights/business_technology/delivering_large-scale_it_projects_on_time_on_budget_and_on_value

Moyce, C.L. (2015).  Resistance is useful.  Management Services, 59, 2, 34-37.

Murray, J. (2013).  The composable enterprise.  Last retrieved 22nd July, 2015 from http://www.adamalthus.com/blog/2013/04/04/the-composable-enterprise/

Preimesberger, C. (2014).  Updating legacy IT systems while mitigating risks: 10 best practices.  Last retrieved 5th September, 2015 from http://www.eweek.com/enterprise-apps/slideshows/updating-legacy-it-systems-while-mitigating-risks-10-best-practices.html 

Souza, B de. (2015). Enterprise architecture and the legacy conundrum.  CIO (13284045).  Last retrieved 16 July 2015 from http://www.cio.co.nz/article/563662/cio-upfront-enterprise-architecture-legacy-system-conundrum/

Serrano, N., Hernantes, J., and Gallardo, G. (2014). Service oriented architecture and legacy systems.  IEEE Software, 31, 5.

Cyber security: how can we turn the corner?

Cliff Moyce: 15 April 2016

Companies that manage data rely on customers being confident that their data (including sensitive, confidential or secret personal details) will be held safe and secure. If this backbone of trust is broken, those using their systems will simply stop doing so. This applies at both a corporate and a consumer level. The particular sensitivities and high levels of personalisation and visibility that characterise many modern enterprises make privacy vital for businesses’ continued existence.

Despite the importance of customer confidence in data security, there have been several high-profile cyber security breaches in the past two years in which enormous amounts of sensitive data were stolen. Hundreds of other breaches have occurred in the same period; they just haven’t made the headlines (in some cases, deliberately so). Companies that have suffered losses of customer data include JP Morgan Chase, TalkTalk, Anthem, Ashley Madison, Patreon, and LastPass. Some of the problems suffered have been so severe as to threaten the future of the company.

In 2016 organisations will be keen to ensure they do not suffer the same problem, but how will they achieve that aim? One important step will be to discard the misconception that data losses are usually the result of technology weaknesses and failures. In fact, it is human failings that are far and away the most common cause of what the press often describes as ‘hacking’. Developing security policies to mitigate the people-risk in cyber security is no longer enough; in fact, it was never enough. Such policies risk being treated as tick-box exercises, or are created with good intent but undermined by a culture of poor practice. Education and training in security policies is essential – but even that can fail if the necessary culture change does not happen. This is where the most important change needs to happen in 2016 to avoid repeating the mistakes of 2014 and 2015. All employees need to be trained and examined on best practice for cyber security and data protection.

One important area that is often overlooked is the risk of individuals falling victim to social engineering outside the workplace. Their compromised status can then follow them into their organisations. It is vital that all staff understand how email attachments, phishing, and impersonation can be used to install malware onto personal devices that are also used for work purposes. By this route, login credentials for the corporate network can be lost to ‘bad actors’. At JP Morgan Chase it was an employee’s personal desktop computer that was infected. When that individual logged in remotely to the corporate network via the company VPN in June 2014, the malware obtained access rights to the network. Human errors that had happened previously at JP Morgan (including forgetting to update security software on one server out of thousands) made it possible for the hackers to gain control of 90 servers and huge amounts of data, and to steal large amounts of money from JP Morgan clients.
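The JP Morgan example shows how a single overlooked update on one server among thousands can undermine an otherwise well-defended estate. As a purely illustrative sketch – the hostnames, dates and thirty-day policy below are hypothetical assumptions, not details from the actual breach – even a few lines of Python can flag servers that have slipped outside an agreed patch window:

```python
# Illustrative sketch only: audit a hypothetical server inventory for
# security software that has not been updated within an assumed 30-day policy.
from datetime import date, timedelta

MAX_PATCH_AGE = timedelta(days=30)  # assumed policy, not a real standard

# Hypothetical inventory: (hostname, date security software was last updated)
inventory = [
    ("app-server-001", date(2016, 4, 1)),
    ("app-server-002", date(2016, 4, 1)),
    ("app-server-003", date(2015, 11, 14)),  # the one that slipped through
]

def find_stale_servers(servers, today):
    """Return hostnames whose last security update falls outside the policy window."""
    return [host for host, last_update in servers
            if today - last_update > MAX_PATCH_AGE]

if __name__ == "__main__":
    for host in find_stale_servers(inventory, today=date(2016, 4, 15)):
        print("ALERT: {} has missed its security update window".format(host))
```

The point is not the code itself but the discipline it represents: if patch status is recorded somewhere machine-readable, the ‘one server out of thousands’ problem becomes a routine report rather than a latent breach.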

If companies invest in the right training and education for their people, the result should be renewed faith in data security. That would be a breath of fresh air for a world that is becoming increasingly wary of how modern enterprises work. One ray of hope is that many organisations are now establishing better security standards and looking for new ways to create more private and secure methods of communication and engagement. Hopefully the outcome will be that people start to feel more confident in using the apps and services that have so much to offer in terms of personal productivity. But will these improvements represent a triumph for everyone? Sadly, no. The loser from tighter security and greater awareness will be the advertising industry, though possibly only temporarily. For advertisers, new security standards will mean investing in less intrusive forms of advertising. Hopefully, in time, that will work as well for them as their current methods do today.

To finish on a cliché: every problem is also an opportunity. With knowledge will come greater online security, more educated users of technology, and (even) more sophisticated advertising!

This article was published originally at http://www.techpageone.co.uk on 15/4/2016

Cliff Moyce

Customer before process

I hope I can be forgiven a little anecdote about my private life in this article, which does quickly turn to the subject of business.  I recently had knee surgery to remove the broken bits caused by my years of competing at weightlifting and powerlifting (they give you a voucher for surgery with every trophy in those sports…).  At the pre-op assessment a week before the procedure they threatened to bump me to the bottom of the waiting list if I refused to confirm what had been entered onto their computer system – ie surgery was taking place on the left knee.  I couldn’t do that as it was the right knee that was broken, and in the end I had to contact the surgeon to pull rank.  The same thing happened on the day of the operation, even when I gently tried to suggest that as my right knee was clearly very swollen, I was using a walking stick on my right side, and I was clutching an MRI of my right knee, I might actually be right.  The pitying looks I got were classic.  Usefully, this got me thinking about what lessons for business could be learned from these incidents.  My conclusions are below.

The first lesson is the potential negative effect of creeping ‘processisation’ (apologies to Shakespeare!). As a founder member of the Institute of Business Process Re-engineering I have a keen interest in how processes can make or break a business. However, that does not mean I value business processes over everything else. What I saw at the hospital was experienced nurses in danger of becoming slaves to process, and giving IT systems more respect than they are due. Last time I looked, IT systems had not fully conquered the garbage in / garbage out problem (which is exactly what had happened in my case, through a perfectly forgivable human error). When I started out on my career, it was often a lack of formal documented processes that made things difficult; these days we have processes for everything – including processes for processes (aka process management). That is by and large a good thing, until following a process becomes a substitute for common sense.

The second lesson is the possible negative consequence of not listening to what the customer wants; a problem often driven by a mistaken belief that you know what the customer needs better than they do. This has been a dominant theme in the years that I have been working on company and project rescues. I can give one real example safely because it was a few years ago, it turned out well in the end, and the directors of the company have now retired. The company had been very successful in the engineering sector with one flagship product and some ancillary products and services. When the founder retired, a new CEO turned up with the attitude of ‘everything is crap’ and ordered that the flagship product be replaced with something more ‘modern’. The sales team were told to go and spread the word about this forthcoming ‘silver bullet’ product that would do everything and more…  Unfortunately the company had made three bad (nearly fatal) mistakes:

  • nobody ever asked the clients if they wanted the main product to be replaced, or whether their own production systems could accommodate a major and discontinuous change
  • realising that they were on a sticky wicket, the sales people tried to justify the change to clients by saying that the current offering was ‘broken’, ‘not modern’ etc
  • building the replacement product was outsourced and became a long running failing project that even made it into the trade press

Having been told that the current product was no good, and with no sign of the replacement product arriving any time soon, clients started to drift away.  By the time my company was asked to get involved (we specialised in company rescues), the client was already in administration, so there were lots of legal and financial issues to be resolved by my colleagues while I led a campaign to retain the existing clientele.  As part of the rescue I used my project management experience to recast and deliver the new product, based on true customer requirements and using in-house personnel rather than expensive external consultants (they had gone by that stage anyway).

I know I can sound like a stuck record when banging on about the need to focus on customer requirements and to really listen to what customers are telling you, but when so many of the problems you have resolved during your career have been caused by a failure in this regard, it does become a bit of an obsession.

BTW the nurses, doctors and surgeons did a great job and retain my undying gratitude. I hope they will forgive me for using a minor blip to make a broader point.

Cliff

Diversity in the professions

Avoiding discrimination in recruitment is both a moral responsibility for company boards and good business sense.  But did you know that diversity in terms of social class and educational background is decreasing in the professions in the UK, with people educated at independent fee-paying schools now comprising 70% of finance directors, 50% of solicitors, and 45% of top civil servants (Panel on Fair Access to the Professions, 2009)?  This despite independent schools teaching only 7% of our children, and 18% of children over the age of 16 (Hensher, 2012).  As a result, working class students are in the minority at almost all English universities, with over 80% of students at the (arguably) ‘top’ nineteen Russell Group universities in England coming from fee-paying schools and colleges (The Sutton Trust, 2008). Oxford and Cambridge present an even more extreme example: “Four private schools and one college get more of their students into Oxbridge than the combined efforts of 2,000 state schools and colleges” (Milburn, 2012).  This bias translates directly into entry to the professions, with 82% of barristers and 78% of judges in 2005 having studied at ‘Oxbridge’ (The Sutton Trust, 2005).

I researched this topic for my master’s degree at Birkbeck, interviewing lawyers in City of London law firms who had come from working class, state-educated backgrounds (an increasingly rare breed).  My report is available on request.  You won’t be surprised to hear that these were the sort of people who had battled hard – and successfully – to overcome barriers that would not exist at all in a fair society.  You may be a little more surprised to hear (or perhaps not) that strong-willed, encouraging mothers played a big role in many of their lives – in many cases mothers who had been denied a good education because of a lack of money in the family, and who therefore had to leave school at the earliest opportunity to start work (as was the case with my own mother).

We may never achieve the utopia of a truly fair society, but as managers and directors with responsibility for hiring people we can be aware of the problem and at least do our best not to propagate it.  As a hiring manager and director I have been wrestling with that problem for much of my career (you can probably guess which side of the tracks I come from), and I can offer a few tips that might help if you want to make a difference:

  1. The most important step is to decide that you want to make a difference by hiring fairly.  Once you make that decision, the rest follows naturally.  You may find it harder to convince your colleagues – and some colleagues will never be convinced – but don’t let that put you off.  You will also find that hiring fairly is much harder work than you realised.
  2. Look at how you recruit people and decide whether there is inbuilt (perhaps unwitting) discrimination.  That doesn’t mean looking at what is written into your hiring policies, but at how recruitment is done in practice (the unofficial process).  Are you or your managers and team leaders rejecting applications because of the university attended, ie not treating all universities as equal?  Are you rejecting on the basis of ‘A’ level grades, even though there is much research showing that two children of equal capability will get different results depending on whether they went to a state school or an independent school?  Are you insisting on a university degree in any subject – suggesting that there is no real vocational or educational requirement for the vacant post?
  3. Research methods of selection and assessment that evaluate the capability and potential of the person regardless of background.  Perhaps hire an organisational psychologist who specialises in this field to advise you.
  4. Look at your supply chain.  What practices are being employed by your external recruitment agents?  Look at the adverts they are posting online.  I once found an agent posting the following as standard: “If you haven’t attended a red-brick university then don’t bother applying”.  Just before I terminated our relationship, I pointed out to him that there are only six red-brick universities in the UK (all in England, and all still true to their original ideals), and they do not include some of the universities to which I suspect he was aspiring (Cambridge, Oxford, University of London, Durham, Bath, etc).
  5. Report on diversity by social class and educational background at your company.  Take pride in telling people that you aim to reflect the make-up of UK society as a whole, and not just a privileged part of it.
  6. Support the work of The Sutton Trust educational charity in going into state schools and opening the eyes of young people to opportunities that await them at your company.

Good luck to anyone who is doing this or wants to try and do it.  I am happy to talk or correspond on the subject.

Cliff Moyce

December 2013