
The chart of accounts: concept & SAP design (R/3 to S/4 HANA)

The chart of accounts (CoA) is one of the most important structures in business. It reflects all the activities a business is involved in and provides the foundation for the majority of financial and management reporting. Correct use of the chart of accounts can both simplify operations and improve decision-making capability.

Often on accounting projects there is a gap between accounting expertise and systems expertise, which can result in a poor CoA design. This can be overcome by understanding the historical context and modern-day principles that surround the CoA. We can then better understand the implementation options in systems such as SAP ERP or S/4 HANA. This article will look at three topics:

  • Part I: Accounting: history & modern principles;
  • Part II: CoA settings in SAP ERP (from R/3 to S/4 HANA);
  • Part III: Common pain points and improvement initiatives.

Part I: Accounting: history & modern principles

Ancient civilizations had accountants!

To fully appreciate the general ledger concept and the CoA we need to step back over 500 years to the origins of accounting and the first documentation of double-entry bookkeeping.

The exact origin of accounting is not known, but basic practices are evident as far back as 2800 B.C. with the Sumerians. These ancient inhabitants of Mesopotamia (modern-day Iraq) were one of the first major civilizations in the world. One of their biggest cities was Uruk, with a population of between 40,000 and 80,000 people. It’s easy to imagine this as a bustling centre for trade at the time.

The Sumerians developed a wedge-shaped script called “Cuneiform” consisting of several hundred characters that scribes would mark on wet clay and then bake. This is thought to have been used to keep records of business transactions (source). The diagram below shows an early bill of sale written in cuneiform. This record-keeping could be considered an early form of accounting.

a bill of sale written in cuneiform

A friend of Leonardo da Vinci

Accounting in the above form has been found throughout history; it’s mentioned in the Christian Bible and the Quran.

The shift from simple record keeping to modern accounting depends on the concept of double-entry bookkeeping. It’s unclear exactly when this was first used in practice. The earliest recorded documentation is found in the following two books:

  • Della Mercatvra et del Mercante Perfetto (On Trade and the Perfect Merchant) by the Croatian merchant Benedetto Cotrugli, written in 1458 (link);
  • Summa de Arithmetica, Geometria, Proportioni et Proportionalita (Summary of Arithmetic, Geometry, Proportions and Proportionality) by Fra Luca Pacioli, a close friend of Leonardo da Vinci, first published in Venice in 1494 (link).

The work by Pacioli is quite complete in that it describes a system of accounting that closely resembles the modern-day approach. It’s thought that much of what he describes was already in use by merchants and traders at the time.

Two early works documenting accounting

Double entry what?

The key principle of double-entry bookkeeping is that any business transaction creates two financial changes within a business. To illustrate:

  • Purchasing a raw material – an increase in the value of raw material, a decrease in the value of cash;
  • Selling a finished product – an increase in the value of cash, a decrease in the value of the finished product.

The two financial changes have to be equal and opposite for a transaction to balance and be complete. These financial changes are categorised into what we know as accounts. The main categories of accounts that exist in a business are: 

  • Assets – what is owned (e.g. cash, property, finished products);
  • Liabilities – what is owed to others (e.g. supplier invoices, loans);
  • Income – sources of cash (e.g. sale of products);
  • Expenses – costs incurred (e.g. rent);
  • Equity:
    1. Capital: amount invested (by owners);
    2. Reserves: profit attributable to the owners (i.e. income – expenses).

In practice, the double entries are posted using debits and credits to the accounts. To understand debits and credits requires an understanding of the accounting equation.

The accounting equation

Consider a business startup. The amount invested by the owner will be equal to the cash assets held i.e. equity = assets. If the business then takes a loan from a bank this will represent an increase in assets (cash from the bank) and liabilities (cash owed to the bank). It can be said that equity = assets – liabilities. This is a key relationship between the account categories discussed earlier.

The accounting equation:

Equity = Assets – Liabilities.

Now if the business starts operations it will incur expenses and generate income. On a periodic basis we can calculate income – expenses, which results in a profit or loss. This changes the value of equity, i.e. equity = capital + income – expenses. With this in mind, we can rearrange the accounting equation to:

Expanded accounting equation:

Assets + Expenses = Capital + Income + Liabilities.
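To make the expanded equation concrete, here is a minimal sketch in Python. The account categories, transactions and amounts are invented for illustration; the point is that the equation continues to hold after any sequence of balanced double entries:

```python
# Toy category totals to illustrate the expanded accounting equation.
# All names and amounts are invented for illustration.
balances = {"assets": 0, "expenses": 0, "capital": 0, "income": 0, "liabilities": 0}

def post(debit_side, credit_side, amount):
    """Apply one balanced double entry to the category totals.

    A debit increases assets/expenses and decreases capital/income/liabilities;
    a credit does the opposite.
    """
    left = {"assets", "expenses"}
    balances[debit_side] += amount if debit_side in left else -amount
    balances[credit_side] += -amount if credit_side in left else amount

# Owner invests 1,000 cash: debit assets, credit capital
post("assets", "capital", 1000)
# Take a bank loan of 500: debit assets, credit liabilities
post("assets", "liabilities", 500)
# Pay 200 rent: debit expenses, credit assets
post("expenses", "assets", 200)

# Assets + Expenses = Capital + Income + Liabilities
assert balances["assets"] + balances["expenses"] == \
       balances["capital"] + balances["income"] + balances["liabilities"]
```

Because every entry adds equal and opposite amounts to the two sides of the equation, the check at the end can never fail for balanced postings.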

Debits and credits

This accounting equation is the key to understanding debits and credits, one of the more mysterious topics in accounting. Debits and credits are used to make the double entries discussed earlier.

  • A debit denotes an increase on the left-hand side of the accounting equation (assets or expenses), or a decrease on the right-hand side (capital, income or liabilities);
  • A credit denotes a decrease on the left-hand side of the accounting equation (assets or expenses), or an increase on the right-hand side (capital, income or liabilities).

To illustrate, let’s look at a manufacturing example: the purchase of raw materials.

  1. Goods & invoice received: credit the vendor (increase in a liability) and debit the raw material inventory (increase in an asset);
  2. Pay the invoice: debit the vendor (decrease in a liability) and credit cash (decrease in an asset).

(Those experienced with systems will know that in reality there are more steps, one of which involves a control account (GR/IR). We will ignore that here for the sake of simplicity.)

It takes time to get used to working with accounts and debits and credits. When working on accounting projects I always recommend drawing out all the accounting entries with t-accounts. With a little practice, it becomes second nature.
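The t-account exercise can also be simulated in a few lines of Python. This is a hedged sketch, with invented account names and amounts, that posts the two raw-material steps above and checks the resulting balances (a negative balance here simply denotes a net credit):

```python
from collections import defaultdict

# Minimal t-account ledger: each account holds a debit and a credit column.
ledger = defaultdict(lambda: {"debit": 0, "credit": 0})

def post(entries):
    """Post one journal entry: a list of (account, side, amount) lines.
    Total debits must equal total credits for the entry to balance."""
    debits = sum(a for _, side, a in entries if side == "debit")
    credits = sum(a for _, side, a in entries if side == "credit")
    assert debits == credits, "entry does not balance"
    for account, side, amount in entries:
        ledger[account][side] += amount

# 1. Goods & invoice received for 100
post([("raw materials", "debit", 100), ("vendor", "credit", 100)])
# 2. Pay the invoice
post([("vendor", "debit", 100), ("cash", "credit", 100)])

def balance(account):
    t = ledger[account]
    return t["debit"] - t["credit"]

# Vendor liability is cleared; inventory is up and cash is down
assert balance("vendor") == 0
assert balance("raw materials") == 100
assert balance("cash") == -100
```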

A simple illustration of the value of double-entry bookkeeping

Consider a merchant in ancient Mesopotamia selling apples. A basic record-keeping approach to accounting could be a simple recording of each sale. On the other hand, a double-entry bookkeeping approach will allow them to track stock and sales in parallel.

Even in this simple example a number of benefits become apparent:

  • Stock and cash are updated at the same time, and the fact that the two entries net to zero provides a mathematical check that the record was correctly made;
  • A running total on stock and cash can be kept and it’s easier to make decisions on whether to change prices based on e.g. stock levels or cash targets;
  • Additional accounts could be added to advance credit to buyers and track receivables vs. cash.

Historically, accounts were a management reporting structure

When working on accounting projects I often see confusion between the terms financial reporting vs. management reporting and internal reporting vs. external reporting. In reality, there isn’t a black-and-white separation between them. Accounts are often described as an external or financial reporting structure, and are sometimes excluded from discussions on management reporting. This is a mistake: accounts were historically developed for management purposes and form the basis of internal management reporting.

The accounting process

In his 500-year-old book Pacioli introduced the concept of the financial statements: balance sheet, income statement and cash flow. To prepare these statements we need to record all business transactions against accounts. Pacioli describes three stages of accounting:

  1. Record transactions in a journal or book of primary entry:
    • Sometimes called a subsidiary book or sub-ledger;
    • Records all transactions in chronological order;
    • Highlights two accounts affected (debit / credit);
    • Includes notes / narration;
    • Different journals/books are used for different purposes e.g. cash receipts, cash payments, purchases, sales.
  2. Transfer to a ledger or principal book:
    • Transactions are posted to separate accounts;
    • The set of accounts is known as the ledger.
  3. Summary (final accounts):
    • At certain periods the ledgers are balanced and a trial balance is prepared, which is then used to calculate the financial position and profit or loss.
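The three stages above can be sketched in a few lines of Python. The journals, accounts and amounts are invented for illustration:

```python
# Stage 1: the journal — chronological entries with narration.
journal = [
    # (debit account, credit account, amount, narration)
    ("cash", "purchases" and "capital", 0, "")  # placeholder removed below
]
journal = [
    ("cash", "capital", 1000, "owner invests cash"),
    ("purchases", "cash", 300, "buy goods for resale"),
    ("cash", "sales", 450, "cash sales for the day"),
]

# Stage 2: transfer (post) each journal entry to the ledger accounts.
ledger = {}
for debit, credit, amount, _ in journal:
    ledger.setdefault(debit, [0, 0])[0] += amount   # debit column
    ledger.setdefault(credit, [0, 0])[1] += amount  # credit column

# Stage 3: the trial balance — total debits must equal total credits.
total_debits = sum(d for d, _ in ledger.values())
total_credits = sum(c for _, c in ledger.values())
assert total_debits == total_credits == 1750
```

Each journal line names the two accounts affected, exactly as Pacioli describes, and the trial balance provides the arithmetic check before final accounts are prepared.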

It’s quite shocking to think that modern ERP systems such as SAP S/4 HANA still work largely in line with the steps laid out in this 500-year-old book. ERP systems such as SAP tend to have different modules or functional areas which represent the books of primary entry, e.g.:

  • A purchasing book;
  • A sales book;
  • A fixed asset register.

When these books of primary entry are updated the financials are transferred to the principal book or general ledger. The main advantage of ERP is the integrated design which makes this transfer occur in real-time.

Modern day accounting

Accounting has grown in complexity over the years and many organisations have hundreds or in some cases thousands of accounts. There are plenty of valid reasons for the growing number of accounts; a few examples:

  • As trade grew in volume the need to break up business transactions into more detailed categories for analysis and reporting grew;
  • With the advent and increase in statutory and regulatory reporting a number of mandated categories of reporting appeared;
  • With the development of enterprise systems the ability to capture more transactional detail and run higher volume transactional businesses evolved and these systems came to significantly influence how the CoA works.

The first step in optimising the chart of accounts is being clear about the role of accounts. Accounts are often described as a structure for external reporting, with different structures used for internal reporting. This is a misleading simplification.

I propose that it’s better to think of the CoA as the foundation for all financial information whether that reporting is for external users or internal users, compliance or decision making.

The balances and line items on the accounts can then be further analysed by other dimensions, which cover factors such as:

  • Team or department
  • Business unit
  • Site (e.g. factory, warehouse or headquarters)
  • Brand
  • Product
  • Responsible person (e.g. director with profit and loss responsibility)

The key to a good CoA design is being very clear about the purpose and usage of accounts vs. other structures, and how they fit together to provide a full set of financial and management reporting.

A common mistake on accounting projects is to set up each structure (legal entities, CoA, cost centers, profit centers etc.) in a silo, based on basic instructions from software vendors. These structures need to be designed in an integrated way, with a view to how they will interact to provide reporting and analysis.

Accounting bodies

Modern-day accounting is governed by various bodies, the key ones to be aware of are:

  • US: The Financial Accounting Standards Board (FASB) issues financial accounting standards (FAS) which comprise US GAAP;
  • UK: The Financial Reporting Council (FRC), with its subsidiary the Accounting Standards Board (ASB), sets financial reporting standards (FRS) which comprise UK GAAP (now more or less aligned to IFRS);
  • International: Originating from a joint effort of Australia, Canada, France, Germany, Mexico, the Netherlands, the UK and the US, the International Accounting Standards Committee (IASC) was formed and issued international accounting standards (IAS). In 2001 it was replaced by the International Accounting Standards Board (IASB), which issues international financial reporting standards (IFRS).

Outside of the U.S. it’s generally best to be familiar with IAS / IFRS and supplement that with local GAAP if and when working in a country that is not closely aligned to the international standards.

We need to be aware of the different standards as they will impact a few factors relating to the CoA:

  • The accounts we have;
  • The number and name of the accounts (some countries mandate specifics);
  • The way we post to the accounts; principles of valuation.

Organisations that operate across multiple countries may need to maintain more than one CoA and produce reports according to more than one standard.

It’s important to note that the accounting standards do not represent an exact set of rules that can be programmed into a system. Accounting works on principles and requires interpretation. This is why we come across terms such as “fair representation”, “comparability” and “materiality” in accounting.

This means that the exact details of transactions as they are captured are often not appropriate for external reporting. Accountants need to strike a balance: presenting information in a true and fair way, but in a way that also benefits the company and its shareholders. This means there is always interpretation, consideration and potentially adjustment before reporting.

Accounting standards

A useful resource for accounting information in the UK is the Institute of Chartered Accountants in England and Wales (ICAEW). The ICAEW maintain a reference list of model accounts.

Another useful reference is maintained by Deloitte.

I would recommend a skim read of IAS 1 – presentation of financial statements.

IAS 1 lists the financial statements as comprising:

  • A statement of financial position
  • A statement of profit and loss or comprehensive income
  • A statement of changes in equity
  • A statement of cash flows
  • Notes

Two pictures from IAS 1 follow as illustrations of how it describes the content of the statement of financial position and the statement of comprehensive income:

Multiple charts of accounts

Organisations that operate across multiple legal entities and/or countries will often require more than one chart of accounts. To illustrate, some scenarios:

  • Organisations with more than one legal entity will need to consolidate their financial information at a ‘group’ level. With this in mind, they may have a different chart of accounts at the group level than at the legal entity level.
  • Organisations that operate across different countries will have multiple legal entities. In addition to the need for legal-entity-level charts of accounts and a group chart of accounts, they may also have to deal with legal entity CoAs differing based on different accounting standards.
  • Even if an organisation has only one legal entity the accounts required to execute all business transactions at the operational level are more numerous than the accounts required to be shown on the ultimate external statements:
    • There may be reconciliation accounts or other accounts required by the way systems work
    • Accounts may be used to break down information for management reporting but not be required for statutory reporting.

The general ledger code block

In accounting systems we ‘post’ transactions to the general ledger. When this happens more than just the amount is captured. The information recorded is sometimes referred to as the GL code block. Basic examples include:

  • The legal entity;
  • The account;
  • The date;
  • Whether it’s a debit or credit;
  • The amount;
  • The user who posted it.

In addition to this, other information relating to the original transaction may be captured. This can be information that is useful for management reporting. Examples include department, brand, fixed asset and product.
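As a rough illustration, a GL code block could be modelled as a record like the following. The field names here are generic examples of my own, not actual SAP field names:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative only: one way to model a single GL line item and its code block.
# Field names are generic examples, not actual SAP field names.
@dataclass
class GLLineItem:
    legal_entity: str               # e.g. company code
    account: str                    # GL account number
    posting_date: date
    side: str                       # "debit" or "credit"
    amount: float
    posted_by: str                  # user who posted it
    # Optional management-reporting dimensions
    department: Optional[str] = None
    brand: Optional[str] = None

item = GLLineItem("1000", "400100", date(2024, 3, 31), "debit",
                  250.0, "jsmith", department="marketing")
assert item.side == "debit" and item.department == "marketing"
```

The design decision is which optional dimensions to carry on every line; each one added enriches management reporting but increases the size and complexity of every posting.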

Part II: CoA settings in SAP ERP (from R/3 to S/4 HANA)

I recommend starting by reading my post on the difference between R/3 and S/4 HANA which provides additional context for some of the topics covered in this section.

SAP versions

The chart of accounts is part of the finance general ledger component of SAP. The structure and naming of the modules has changed over recent versions; highlights include:

R/3 started with the FI (finance) module, which included GL. This is connected to a separate CO (controlling) module for additional management reporting. As of ECC 6.0 it was possible to activate NewGL, a simplification and evolution of FI and CO. As the HANA platform was introduced, Simple Finance became available. Finance has then gone through slightly different names as S/4 HANA has delivered further simplification and enhancements.

Despite different versions and names, elements of FI and CO are still present in the latest release. The latest release should be considered as a simplification and evolution rather than a totally different system.

R/3 and ERP modules

SAP systems are broken down into modules and components. These are separate sets of tables and programs that deal with particular sets of activities. A rough illustration of R/3 and ERP could look like this:

  • Within FI we have GL as a central component;
  • All the components in grey can be considered sub-ledgers from a financial perspective. Some, such as materials management, are separate modules outside of finance; others, such as asset accounting, are part of FI. These all post to the general ledger;
  • The GL is connected to controlling via cost element accounting for additional management reporting capability. Controlling components are shown in green;
  • A noteworthy component is FI special ledger, shown in yellow. The special ledger is a separately configurable ledger that can collect data from various application components.

Configuring FI – setting up the CoA and accounts

We configure SAP using the implementation guide (IMG). In this section I will highlight key steps and structures relevant to the CoA. I won’t step through the implementation guide. There are plenty of good books and help guides that walk through the details of the configuration step by step.

A note on the instance and client

We install SAP as an instance, within the instance we can define multiple clients:

  • All programs and a few configurations are common across an instance;
  • The majority of configurations can be defined independently in a client.

Everything from here on will be within one client.

Define key financial structures

1. Chart of accounts: The first step is to create the chart of accounts. This can be created from scratch, copied from an SAP template or imported. Copying can bring across the CoA, all the G/L accounts and other settings.

From a technical perspective, starting by copying the SAP template is a good idea as it can simplify configuration. However, it’s critical to define an optimal CoA for your own business, so I recommend extensive review and adjustment of any template.

2. Fiscal year variant: This defines the number of posting periods in a year, typically 12 regular (one per month) and 4 special.

3. Posting period variant: This defines which periods are open for posting.

4. Create company codes: A company code usually represents a separate legal entity. A full set of accounting records can be produced at the company code level – balance sheet, income statement, including tax.

It’s important to note that there are challenges and difficulties in getting a full set of accounts at a level below company code. This should be a key factor in considering the right structure for your business. This changes slightly in NewGL, as we will see later.

5. Create account groups: In line with the way financial statements are structured, accounts are categorised by type using account groups. These account groups let us control what type of posting the account can receive and what information is collected. Typically we create account groups for assets, liabilities, revenue, expenses etc.

6. GL accounts – accounts can initially be created centrally with basic information. At this level they can’t be posted to.

7. GL accounts are then activated per company code, and additional settings are added to control postings within that company code.

When setting up the company code, CoA, account groups and creating accounts there are various configuration points that control the information captured in GL postings and the fields that appear on the transaction screens. This can be seen working through the implementation guide step by step.

Other key factors closely connected to the CoA in FI

Currencies – traditionally in R/3 and ECC it’s possible to track several currencies:

  • Transaction currency;
  • Company code currency;
  • Two additional currencies, e.g. hard currency or group currency.

Within financials there are also two additional reporting dimensions that form part of the company code / CoA / account group / accounts set-up:

Business area – originally designed to provide a cross-company-code view of the financial statements. Note that it is hard to reconcile business areas to company codes, which can make them difficult to use. Business areas can be derived from things like a plant, sales area, cost centre or fixed asset.

Functional areas – the idea is to split the view of accounts by function; an example is having one GL account for labour and using different functional areas for sales, R&D, marketing, production etc. This is closely connected to management reporting through controlling.

The link between financials and controlling

As this article is focussed on the CoA I won’t go into the details of management reporting in controlling. However the CoA is used as the link between FI and CO. A very brief explanation:

  • To recap, FI uses accounts to capture summarised information on business transactions, e.g. amount, business area etc.;
  • CO is a separate module which captures and manages additional data on management structures e.g. cost centers, internal orders, profit centers;
  • If a transaction has a financial and cost management impact then documents are posted in both FI and CO;
  • These documents are connected by cost elements. A cost element is created for each GL account identified as relevant for profit and loss.

Issues with GL in R/3 and ECC

As can be seen above, the basic structure related to the CoA in R/3 is not complicated. Having worked through a number of R/3 and ECC implementations, the challenges I have seen with design and set-up include:

  • Account concept not correctly implemented. For example, rather than having one account for labour costs and using cost centres to split labour cost by department, we have one labour account per department;
  • Inability to handle multiple valuation requirements for regional or global projects;
  • Limitations with the data that could be tracked in the GL. For example, in financial services it’s often important to track sub-ledger information such as policy or agreement number;
  • Limitations with the number of currencies;
  • Inability to get a full set of accounts below company code level e.g. in the case of monitoring financials for a manufacturing site.

Enhancements with NewGL and S/4 HANA improve the ability to cater to several of these. However, it’s still key to design and implement the correct concept for accounts.

Parallel valuation

The ability to record and report according to multiple accounting standards is an important topic. In R/3 there were three ways to do this:

  • Use additional accounts (creating extra accounts)
  • Use an additional ledger (using special ledger)
  • Use an additional company code

None of these was ideal, each creating some additional complexity and effort. The second option uses the Special Purpose Ledger (FI-SL), a separate application where ledgers can be defined for reporting purposes.

ECC 6.0 – NewGL

As part of ECC 6.0, SAP introduced NewGL. This was a step in the right direction for FI, resolving a number of key issues.

Within NewGL company codes, a CoA, account groups and accounts are defined as before, but there are also additional simplifications and enhancements. These include:

Parallel accounting: NewGL allows for the specification of a leading ledger and non-leading ledgers. This makes it possible to handle parallel valuation without having to rely on the accounts approach or the special ledger. A leading ledger can be defined according to the group accounting standard (e.g. IFRS) and a non-leading ledger can be defined for local standards (e.g. local GAAP), and these ledgers can be used to track the transactions that have to be valued in different ways.

IAS 14 segmentation: International accounting standard 14 brought with it the requirement to split accounts by business segment or geography. This means that a company code may now need to provide a full set of financial statements at a lower level. NewGL added a new field, “segment”, to the GL code block. Segment can be updated based on the profit centre.

Document splitting: Also connected to IAS 14, it’s possible to get a full set of accounts at a level below company code by using document splitting. Prior to NewGL, a company may have wanted to see a full set of accounts by a dimension such as profit centre. It was possible to have profit centre included in the majority of account postings, but not all; e.g. tax account postings don’t include any account assignments. Document splitting essentially forces the account assignment to be included on every account line in a document at the time of posting.
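The idea can be illustrated with a toy example. This is a simplified sketch of the splitting concept, not SAP's actual algorithm: a tax line with no profit centre inherits one pro rata from the expense lines that carry an assignment.

```python
# Hedged sketch of the document-splitting idea: lines without an account
# assignment inherit one pro rata from the lines that have one.
# Accounts, amounts and profit centres are invented for illustration.
doc = [
    {"account": "expense", "amount": 80.0, "profit_centre": "PC1"},
    {"account": "expense", "amount": 20.0, "profit_centre": "PC2"},
    {"account": "input tax", "amount": 10.0, "profit_centre": None},
]

def split(lines):
    assigned = [l for l in lines if l["profit_centre"]]
    total = sum(l["amount"] for l in assigned)
    out = [dict(l) for l in assigned]
    for line in lines:
        if line["profit_centre"] is None:
            # Split the unassigned line in proportion to the assigned base
            for base in assigned:
                share = line["amount"] * base["amount"] / total
                out.append({**line, "amount": share,
                            "profit_centre": base["profit_centre"]})
    return out

result = split(doc)
# Every line now carries a profit centre; the tax of 10 is split 8 / 2
assert all(l["profit_centre"] for l in result)
assert sum(l["amount"] for l in result if l["account"] == "input tax") == 10.0
```

After splitting, a full set of accounts can be drawn for PC1 or PC2 alone, because no line in the document is missing the profit centre assignment.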

Customer fields: Addition of a number of customer-defined fields to the GL code block.

More info on SAP help

Note that NewGL also involved simplifications to the underlying FICO tables and the way FI and CO reconcile, which I won’t cover here.

S/4 HANA

One of the biggest changes S/4 HANA brings is the Fiori front end and a new approach to user experience on desktop, tablet and mobile. It’s now possible to customise a launchpad for the role of G/L accountants working with the CoA. A range of apps is available. One of the new Fiori apps I noticed in S/4 HANA 1909 that I like is the t-account view on account postings:

T-accounts make it much easier to understand debits and credits at a glance. To browse Fiori apps use the app library.

Universal journal

The universal journal is one of the biggest changes to FICO. Enabled by the HANA platform, SAP have been able to rationalise the table design in SAP.

A new table, ACDOCA, and a set of journal transactions allow GL entries to be made as a single source of information, including cost centers, internal orders, WBS elements, CO-PA characteristics and other information from other modules.

Extension ledgers

In NewGL SAP introduced leading and non-leading ledgers to cover requirements for parallel valuation. Extension ledgers are a continuation of advancements in this space. The benefit of extension ledgers is that they only capture entries that are different for the multiple valuations in question. I came across a good blog post from Martin Schmidt on extension ledgers.

8 definable currencies

SAP has increased the number of currencies that can be captured in G/L postings.

Table simplification & compatibility views

With the inclusion of the new universal journal and table ACDOCA, a lot of tables have been eliminated. To avoid the need to re-design a lot of historical functionality, old programs can access ACDOCA through ‘compatibility views’.

The latest version of S/4 HANA at the time of writing is 1909. A summary of the new features can be found on the product page on SAP help. Both the features and scope and the simplification list can be found there.

There are many other changes that come with S/4 HANA, the above represents only the key highlights from a CoA perspective.

Part III: Common pain points and improvement initiatives

After summarising the concept and SAP design implications for the CoA, I’d like to summarise some of the common pain points, and guidelines on improvement initiatives.

Pain points

1. Dealing with multiple charts of accounts

There is a valid reason to have alternative accounts to cater to multiple accounting standards (parallel valuation); however, organisations often have multiple charts of accounts for other reasons:

  • No central governance of finance; individual business units or countries freely configure their own systems;
  • Central finance governance in place, but implementation in systems is not controlled; leading to variation;
  • Making acquisitions without carrying out full integration.

This can lead to the following pain points:

  • No standard financial language across the business; no common way to refer to the financial impact of business transactions;
  • Mapping needs to be maintained for group consolidation;
  • Interpretation is difficult at the group level as original postings are made to a different account structure.

2. CoA not well aligned to the financial structure

The operational CoA may not be well aligned to the financial statements that need to be prepared at a group level. This can happen in a few ways:

  • Too many accounts are created to reflect not only financial transaction type but also departments, teams or products;
  • Accounts are not well named or described and no guidance is available on correct usage.

This can make it difficult to maintain a mapping to the financial statements.

3. Poor quality accounting policy

  • Transactions entered against inappropriate accounts; leading to statutory reporting being misleading or business performance being misinterpreted;
  • Incorrect valuation methods, approvals, materiality limits etc. applied to certain postings.

4. Accounts not governed to meet changing requirements

  • New statutory or regulatory requirements met using workarounds with existing accounts;
  • No longer required accounts still used – missed opportunity to streamline transaction capture, close and reporting.

5. Different CoA across different systems

  • No ability to ‘drill down’ from consolidated reports to originating transactions – increased time and reduced transparency in responding to queries.

6. CoA not used as ‘main basis’ for management and regulatory reporting as well as statutory

As the CoA is primarily finance owned (statutory), management and regulatory reporting needs are given secondary status:

  • Parallel similar structures maintained in management reporting tools but not well aligned to the financial CoA;
  • The effort to reconcile statutory, management and regulatory numbers is increased;
  • Potential confusion between stakeholders on ‘correct final numbers’ versus various estimates, flash reports etc.

7. Too many accounts

An excessive number of accounts (excessive use of ‘nice to have’ accounts, excessive detail) creates difficulties in:

  • Identifying right account for posting;
  • Maintaining controls and policy;
  • Interpreting account balances.

8. Account design based on systems

Software companies may provide a sample CoA, however, they are not experts in an individual business. Blindly following the logic of a system from an accounting perspective can provide an inefficient structure for financial and management reporting.

9. Effective flow of numbers, but lack of contextual information

The process of recording transactions through to preparing financial statements is heavily based on numbers coded with data dimensions; careful consideration needs to be given to how commentary for business analysis fits into this flow on key transactions.

This is the biggest gap I’ve seen with accounting systems. None of them provide a good solution for capturing contextual information at the point of transaction entry and carrying it through to periodic analysis. This is not necessarily a big issue in industries such as manufacturing, where the structures in product cost control make context less important. In financial services, however, it can be critical.

10. Extent of usage of GL code block

The number of dimensions captured in a G/L posting is an important design decision. Capturing just a few ‘management reporting’ dimensions will reduce the volume of data stored in the system and the complexity of individual postings. Normally a full set of accounts can only be prepared by legal entity/company code. If a full set of accounts is required at a lower level, e.g. plant, segment or profit centre, then these details need to be captured in every line item of every G/L posting. If a full set of accounts is not required, this reporting is sometimes better delivered from alternate reporting structures. This is less of a concern today than it was in the past due to the improved performance of business systems such as S/4HANA.
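As an illustration of this trade-off, a G/L code block can be pictured as a record type where every extra reporting dimension becomes another field that must be populated on every line item. This Python sketch uses invented field names, not those of any particular ERP:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: a minimal G/L journal line. Field names are
# hypothetical and not taken from any specific ERP.
@dataclass
class JournalLine:
    account: str                       # natural account from the CoA
    company_code: str                  # legal entity - usually mandatory
    amount: float
    currency: str = "EUR"
    # Each optional dimension below must be filled on *every* line item
    # if a full set of accounts is required at that level.
    profit_centre: Optional[str] = None
    segment: Optional[str] = None
    plant: Optional[str] = None

line = JournalLine(account="400100", company_code="GB01",
                   amount=1250.0, profit_centre="PC-NORTH")
print(line.segment is None)  # True - no segment captured on this posting
```

The more of these optional dimensions are made mandatory, the richer the reporting directly from the G/L, but the heavier every single posting becomes.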

Considerations for good practice

Discussion of ‘best practice’ is not necessarily useful based on the different requirements across enterprises by industry, size, focus etc. 

And with the 80/20 rule in mind, it’s often better to focus on eliminating major pain points and pursuing the more obvious elements of ‘good practice’.

With that in mind, a few examples of good practice include:

  • The volume of accounts reflects a sensible view of the level of detail that needs to be captured in order to meet statutory and regulatory requirements and support management and regulatory processes;
  • The usage and control requirements of each account are clearly defined;
  • The majority of transactions are automatically posted to the general ledger based on originating entries in business systems e.g. financial trades, invoices etc.;
  • Use of manual accounts is minimised;
  • Use of reconciliation accounts is limited to those truly required;
  • The Chart of Accounts and associated GL code block have a system-agnostic basis, meaning that changes to the IT landscape do not add complexity and acquisitions and divestitures can be easily handled;
  • A clear strategy is in place to handle multiple valuation methods e.g. different depreciation rules in different countries. Depending on systems there are various approaches, from multiple accounts to multiple ledgers; as this adds complexity, the right solution should be carefully identified;
  • A limited number of manual adjustments at period-end close. Adjustments are logged and the reason for each adjustment clearly documented. Accounting policy and CoA design are regularly reviewed in order to reduce required adjustments;
  • Group and operational CoA are closely aligned, with a similar financial language at all levels;
  • The CoA is designed in accordance with an overall conceptual data/information model which clearly defines how accounts work versus other objects e.g. profit centres, countries, business areas etc.

Structure of the CoA & conceptual data models

When it comes to the structure of the Chart of Accounts there are some reasonably well established good practices, which include:

  • Follow the structure of the financial statements;
  • Within that, follow a natural order of importance to the business (consider account balances);
  • Set number ranges to follow the structure, avoid alphanumeric, leave gaps for future accounts;
  • Clearly capture requirements – statutory & management;
  • Create a conceptual data model to lay out how other dimensions are used; this helps avoid creating accounts which duplicate the function of other dimensions e.g. accounts for departments vs. transaction types;
  • Clearly identify and limit the use of special accounts for manual control or reconciliation purposes;
  • Avoid accounts for system reasons as far as possible, look at other control methods.
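As a sketch of the number-range principle above, the ranges below follow the statement structure and leave room for future accounts. The boundaries are invented for illustration only:

```python
# Hypothetical number-range scheme following the financial statement
# structure. Ranges and section names are illustrative assumptions.
RANGES = {
    "Assets":      range(100000, 200000),
    "Liabilities": range(200000, 300000),
    "Equity":      range(300000, 400000),
    "Revenue":     range(400000, 500000),
    "Expenses":    range(500000, 600000),
}

def classify(account: int) -> str:
    """Return the statement section an account number belongs to."""
    for section, rng in RANGES.items():
        if account in rng:
            return section
    raise ValueError(f"Account {account} is outside all defined ranges")

print(classify(410200))  # Revenue
```

A simple check like this can be used in account-creation workflow to stop accounts being created outside the agreed structure.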

A conceptual information model is also extremely useful. This isn’t a formal technical data model, but rather a simple matrix which shows, by account/value/KPI, which dimensions need to be tracked.

This is a useful way to decide what information needs to be captured within the general ledger vs. what will be captured and recorded via other reports. 

A classic example is where a full set of accounts is needed, e.g. by IFRS Segment; this has to be captured in the GL. However, a variety of sales-related reporting could be provided directly from the sales systems, e.g. sales by salesperson or sales organisation.

This is an excellent tool to align stakeholders and to cross-reference with report requirements from individual reports.
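A conceptual information model of this kind can be as simple as a lookup table. The sketch below uses hypothetical KPIs and dimension names to show the "in the G/L versus from another system" decision:

```python
# Conceptual information model as a simple matrix. For each value/KPI:
# which dimensions must be captured in the G/L, and which system serves
# the detailed reporting. All rows and names are illustrative.
MODEL = {
    # value/KPI:         (G/L dimensions,               detail source)
    "Revenue":           ({"company", "ifrs_segment"},  "sales system"),
    "COGS":              ({"company", "ifrs_segment"},  None),
    "Sales by person":   (set(),                        "sales system"),
}

def gl_dimensions(kpi: str) -> set:
    """Dimensions that must appear on every G/L posting for this KPI."""
    dims, _source = MODEL[kpi]
    return dims

print("ifrs_segment" in gl_dimensions("Revenue"))  # True
print(gl_dimensions("Sales by person"))            # set() - not in the G/L
```

Reviewing a matrix like this with stakeholders makes it explicit which reporting the general ledger is, and is not, expected to carry.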


One of the major factors that separate the more effective organisations from the rest is governance; this is particularly true when it comes to managing hierarchies, master data, processes, systems etc. Chart of Accounts may be a complex area, but if well governed, it can be effectively managed. Key governance considerations include:

  • Move towards one master reference CoA;
  • Maintain the master CoA in one system;
  • Assign a centre of excellence owner for the CoA with approval on create / update etc.;
  • Create a policy for the CoA including principles for account creation and usage;
  • Formalise a workflow for CoA maintenance with appropriate approvals;
  • Provide training to non-finance users who have ‘posting’ contact with the general ledger e.g. purchasing, sales, payroll to ensure they understand how their data relates to the GL.

In some circles a highly flexible CoA is recommended, in others a highly controlled one. The truth is that each enterprise, particularly across different industries, will have different requirements. A balance has to be struck: the CoA should be as simple and transparent as possible, while still providing the information required for statutory, regulatory and management reporting and decision making. Within management reporting, the planning and budgeting process should also be considered.

Project approach to improve the CoA

Continuous improvement and big bang approaches are both valid to improve the CoA. As with many finance transformation projects care should be taken around year-to-date and current vs. previous period reporting:

  • Changes to the CoA during fiscal year reporting may confuse year to date reporting and may require manual mapping at period-end;
  • Changing at fiscal year-end will not affect year to date, but will affect the current vs. past period analysis particularly at year-end.

There are different steps to work through; one suggestion is to 1) start with requirements and 2) analyse existing issues. A simple illustration:


There are a number of other systems worth considering which relate to the CoA and accounts; these include:

  • Integration layers that connect sub-ledgers with G/L (especially in financial services);
  • Consolidation engines e.g. SAP BCS, BPC, S/4 HANA Central finance;
  • BI tools (a vast array of financial and management analysis and reporting);
  • Add-ons such as BlackLine;
  • Master data management tools.

However, regardless of the systems used, the same conceptual design and governance points need to be considered.

As I was writing this I was wondering about the experience of other people with the CoA:

  • What challenges have you faced with the CoA in your organisation?
  • What features of ERP around CoA do you find most useful?

For the future, technologies such as cloud and AI provide potential to better analyse how we use the CoA and post transactions. However, one area that’s harder to analyse is the gap between “system produced accounts” and “published accounts” where there is a significant amount of interpretation and adjustment. Automatic generation of summary commentary using NLP based on original documents might be an interesting concept for the future.

Articles Finance Transformation

A simple guide to cost reduction

1) Cost reduction and CSR

Whenever discussing cost reduction it’s important to consider corporate social responsibility as a starting point. Ideally, companies can find the right balance between reducing costs and thinking about employee and business partner (e.g. supplier) impacts. Employee layoffs can have a devastating effect on individuals. Business partner changes can put companies out of business. When planning and executing cost reduction these impacts should be considered. In real life, this might mean trying to offer employees other options (reduced hours / different contract terms / new profit-generating roles) or business partners a revised agreement.

2) A model for cost reduction

A good starting point for cost reduction is to consider the type of costs incurred within the business. Accounting provides a consistent way to look at this. Costs are either tied directly to production or not. For example, in a manufacturing environment there are factories with machines, operated by people to produce products. Cost accounting is used to calculate ‘cost of goods sold’ (COGS). This is the cost that can be directly tied to converting the input materials to the finished product, in this case by adding the raw material, labour and utility costs.

figure 1: cost structure

In addition to COGS there are operating expenses. These include the entire cost of departments such as accounting, human resources and IT, as well as the costs of headquarters and sales offices. The majority of operating expenses are sales, general and administrative costs (SG&A). This is sometimes further broken down into sales costs (S&A), e.g. advertising campaigns, and general and administrative costs (G&A), e.g. rent and utilities.

In service industries there is no COGS, so SG&A is even more important.
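The split between COGS and operating expenses can be made concrete with some toy income-statement arithmetic (all figures invented):

```python
# Toy figures illustrating the cost structure described above.
revenue       = 1_000_000
raw_materials = 300_000
direct_labour = 150_000
utilities     = 50_000

# COGS: costs directly tied to converting inputs into finished product
cogs = raw_materials + direct_labour + utilities

sga = 250_000                       # sales, general & administrative
gross_profit     = revenue - cogs
operating_profit = gross_profit - sga

print(gross_profit)      # 500000
print(operating_profit)  # 250000
```

Laying costs out this way shows immediately which initiatives attack COGS (volume-driven) and which attack SG&A (largely volume-independent).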

This structure will be useful in planning cost reduction initiatives. Each cost area can be considered in turn. Typically SG&A is seen as one of the top priorities of cost reduction as it’s somewhat independent of production and sales volume.

Approach to cost reduction

A basic approach to cost reduction that many organisations follow is to set blanket cost-cutting targets across the entire enterprise. Why? Perhaps because identifying the best place to cut costs requires a lot of analysis, thought and difficult decisions. Setting blanket targets seems to be an easy way out and hands the responsibility to individual budget owners.

Costs are normally managed as part of annual budgets. This annual budget-setting process is complicated; it can take as long as 6 to 9 months. Typically the executive sets high-level targets on sales, margin and costs. These are then filtered down through management layers to individual budget owners. There is then a back and forth that can continue for several months to agree on the budgets. The targets often keep moving during this process.

Budgeting is often based on previous year actuals. If the executive takes last year’s budgets and asks each team to cut costs by 5%, each budget owner will try to negotiate a lower cut. This leads to an allocation of investment based on each budget owner’s ability to justify and negotiate. The company might make its 5% target, but at the cost of reducing investment in the wrong areas.

A better approach to cost reduction would be to start from strategy and think carefully about the right areas to reduce costs.

During this strategy review the executive can carefully consider the products / services and markets and the different cost categories present across the business. An approach would be to build a matrix by cost category and product / service and answer some questions:

  • How closely tied is the cost to sales and production;
  • Has cost been challenged or reviewed in that area previously;
  • Is that department a key dependency for high-volume or highly profitable products / services or customer segments;
  • How easy is it to make changes to the area in question;
  • etc.

This kind of analysis would be useful for deliberating the right focus areas for cost reduction. These will differ based on industry / market etc. Another useful tool is to apply zero-based budgeting. This will move away from assuming previous year budgets are the right starting point.

A word on benchmarking

Benchmarking is a common tool which helps to reduce costs. Ratios of different cost to revenue factors are often used to get a broad idea of whether a particular part of a company is efficient and shows value for money. The cost of the finance function as a percentage of revenue is an example of this.

But benchmarking can also be used in more granular ways. Two examples:

  • Checking salary rates against competitors – easier than ever with the advent of online comparison tools;
  • Internal benchmarking of departments / functions against one another; this can be done with sales offices, manufacturing sites etc. Where the benchmarked areas have slight differences, complexity measures can be identified to adjust the benchmarks.
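Both styles of benchmarking reduce to simple ratios. The sketch below uses made-up figures; the complexity factor is a hypothetical adjustment of the kind mentioned above:

```python
# Top-level benchmarking: a cost-to-revenue ratio (figures invented).
def cost_ratio(cost: float, revenue: float) -> float:
    return cost / revenue

# Finance function cost as a percentage of revenue
print(round(cost_ratio(1_200_000, 100_000_000) * 100, 2))  # 1.2

# Internal benchmarking: cost per order across sales offices, divided
# by a hypothetical complexity factor so unlike offices stay comparable.
offices = {
    "London": {"cost": 500_000, "orders": 10_000, "complexity": 1.2},
    "Leeds":  {"cost": 350_000, "orders": 9_000,  "complexity": 1.0},
}
for name, o in offices.items():
    adjusted = o["cost"] / o["orders"] / o["complexity"]
    print(name, round(adjusted, 2))
```

The absolute numbers matter less than the consistency of the calculation: the same loading and adjustment rules must be applied to every unit being compared.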

Ten ways to cut costs

Over the years I’ve seen or been involved in several cost-related programs. Here are ten areas where I’ve seen good results.

1. Focus on core products / services

My first employer, Procter & Gamble, is a good story of the importance of focussing on core products and services. Around 2000 P&G was struggling. It had become ‘fat’: too many new products and services, and a large, unorganised support function (high SG&A costs). A.G. Lafley became CEO in 2000 and two key strategies helped with the recovery:

  • Focus on core brands/products;
  • Build an efficient back office (simplify, standardise, centralise).

This needs to be carefully planned and will deliver significant mid- and long-term benefits.

2. Cancel projects

The success and cost rates of projects are rarely a focal point of strategy. Every company should have a central PMO that monitors the benefit-to-cost ratio for all programmes. This can be a low-effort, light-touch PMO. Projects can be expensive; the key is to carefully control project budgets and stop or re-direct them quickly.

3. Create a central procurement organisation

For large organisations, scale can be leveraged by procuring centrally. This goes for everything from raw materials to pencils. Rather than individual facilities/locations/teams making their own purchases, all purchases can be handled by a central team. This is an opportunity to buy at scale and also build a procurement and negotiation centre of excellence.

4. Centralise operations – finance, IT, HR, legal, marketing etc.

If staff are decentralised across locations there will be an opportunity to improve rent and utilities costs as well as reduce overall management effort by centralising teams. This can be done for all staff not directly tied to production or field sales. Centralisation has a lot of additional benefits for culture, quality and controls. Centralising functions will also have a knock-on effect by simplifying requirements from IT, HR and property services.

5. Offshoring

Offshoring, whether based on a captive or outsourced model, can massively reduce labour cost. I’ve seen savings of over 50% on labour, which can be a huge proportion of SG&A, particularly in service industries. This does require detailed planning and very careful execution. However, it’s easy to calculate potential savings. First, calculate a blended fully loaded cost for the functions you are considering e.g. finance staff, and then calculate the equivalent for any target country of interest. The information to calculate this is freely available online.
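The savings calculation described above can be sketched as follows. The salaries and overhead rates are invented; the point is only the shape of the comparison:

```python
# Blended fully loaded cost comparison (all figures hypothetical).
def blended_cost(salaries: list, overhead_rate: float) -> float:
    """Average fully loaded annual cost per head:
    salary plus a flat overhead loading (benefits, office, IT etc.)."""
    return sum(s * (1 + overhead_rate) for s in salaries) / len(salaries)

onshore  = blended_cost([60_000, 45_000, 80_000], overhead_rate=0.30)
offshore = blended_cost([18_000, 14_000, 25_000], overhead_rate=0.35)

saving_pct = (onshore - offshore) / onshore * 100
print(round(saving_pct, 1))  # 68.0
```

In practice the overhead loading differs by location (offshore facilities often carry extra transition and management costs), so the rates should be estimated per country rather than assumed flat.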

Outsourcing has somewhat of a bad reputation. Based on my experience this is normally due to poor execution. Most of the time the companies squeeze the outsourcing supplier too far on price and hence receive poor service levels. Cost reduction should always be balanced with quality.

6. Review IT licensing

One of the patterns in IT over the last decades has been an ever-increasing number of technology products, services and suppliers. It’s worthwhile to consider rationalisation in this space. Is each application needed? Do we have the right number of licenses? Can terms be renegotiated?

7. Deep dive into sales costs

For companies with field sales there may be opportunities to reduce costs. The key factor is to figure out what costs drive success in sales. Travel, events, conferences etc. should be carefully analysed to understand how much they impact sales. Moving events online, or reducing budget for entertainment can result in significant savings.

8. Deep dive into employee costs

Layoffs are one option, but there are also other opportunities to reduce labour costs. Is the salary banding simple and efficient? Are the package-related costs (benefits) good value? Is there an opportunity to change contract structures? Would employees consider a four-day week?

9. Discounts

Discounting can easily be overlooked on cost reduction initiatives as it happens at the point of sale. Discounts can total up to have a large impact on margin. This is true especially in industries like Pharmaceuticals where complex pricing structures are used.

10. Process improvement

Methods such as a Lean culture and tools such as automation (RPA) can be key drivers of effort reduction; however, these do not reduce costs on their own. There must be layoffs or re-assignment of people to revenue-generating roles.

What the experts say

I’ve taken a look at some of the leaders in strategy and management to see what they have recently published on cost reduction.


McKinsey recently wrote this article about cost reduction, noting that it’s a top priority for most corporations and observing that cost targets seem to be fairly unfocused. The following diagram is a useful illustration of how organisations normally set blanket targets:

Source: McKinsey


Bain currently have a webinar that looks at zero-based budgeting and five key themes of cost reduction. They also have an insight article that covers the less widely focussed on fixed product costs together with sustainability. It’s interesting to consider how a move to more sustainable packaging can cut costs. This is a good example of considering cost and CSR together:

Source: Bain

Bain also have an article focussing on where to cut costs. They recommend focussing strategically on the right areas of the business. The ‘where to play’ and ‘how to win’ diagrams are useful and are normally an effective way to structure the discussion about the products / services and market segments to focus on and invest in vs. the ones to consider downsizing or as they note below divesting.

Source: Bain

Strategy& (PwC)

One of the first insights posted on Strategy& includes a downloadable PDF where they share a fairly comprehensive plan and approach to tackling cost. At a glance, this includes a starting point of thinking clearly about strategy and the market and then moving onto execution. I like the categorisation of short term, midterm and long term actions.

Source: Strategy&

cover graphic by

Articles Project management

Improving project management by focussing on strategy and people

Many of the projects I’ve come into contact with over the years run into problems that stem from a lack of focus in two areas: the ability to deal effectively with people, and depth of knowledge of strategy.

I’ve been around projects for almost 20 years. Working in roles such as systems analyst, business analyst, project manager, program manager, management consultant and change manager. During this time I’ve seen a wide variety of methods and tools applied, but regardless of those, there are consistent underlying problems.

The profile of the project manager is critical to a project’s success. What experience do they have? What do they focus on? What tools do they use? How do they manage conflict? A good project manager needs a wide variety of skills.

If I were to estimate I would say around 1 in 5 projects that I come into contact with have good, multi-faceted project managers. The good news is we can all become better project managers and in this article I’d like to address one of the biggest opportunity areas for improvement.

I’ll start from where project managers are strong; methods and tools. Project management seems to attract people who are organised and enjoy structure and detail. They feel confident about studying approaches and tools and applying them to their projects. I often see a huge focus on topics such as:

  • Documenting the project e.g. writing scope statements;
  • Drafting plans including the very popular GANTT chart;
  • Preparing lists; issue lists, problem lists, change lists, stakeholder lists etc.

These are very important. However, having these in place doesn’t mean a project will be successful. In some cases, an over-reliance on these may cause problems.

The areas I don’t see enough focus on are strategy and people.


Strategy can be challenging in many ways:

  • It’s difficult to access senior people responsible for strategy;
  • Strategy information is often not well deployed through an organisation;
  • To understand strategy requires a broad view of business, many project teams suffer from a silo focus on functions or technologies.

Ensuring you understand corporate documentation on strategy, including annual reports or internal communication packs, is a good start, but ideally you want to understand the nuances that senior management are focussed on.


Anthropology is a huge topic of its own. Why do people behave the way they do? This requires developing observational skills and experience of how to intervene in various situations.

As a starting point just being aware that people are a factor and taking this into consideration can make a difference.

Project management – where’s the people advice?

My path into project management will be somewhat familiar to many. I spent three years working as a systems / business analyst before being given the opportunity to manage my first project. I was both excited and scared about the prospect of becoming a project manager.

I had enough experience to know how projects worked. I had worked under good project managers and because of this I felt comfortable with the tools and methods. I had a solid understanding of how to:

  • Scope a piece of work;
  • Estimate effort and put together a team;
  • Create a work breakdown structure and draft a plan;
  • Plan and execute the stages of a systems development lifecycle;
  • etc.

Despite this, I was nervous about taking on my first project. The root cause of my nerves came down to the dependency on people. When we work as analysts we can rely on technical skills and effort to deliver our work successfully. When we switch to project management it’s a completely new paradigm: we suddenly need to rely on a disparate team of individuals for success.

This is not the same as becoming a team leader. As a team leader we have levers such as career management, performance reviews etc. to build a relationship with our team. A project manager is in a unique position of relying on a team of people, while often having very little power over them.

Not all of the individuals on our project team will care about project success. Some may even be against it.

The question that none of the project methodologies would answer

As I started leading projects one challenge came up over and over again, and it appears very simple:

How can I get people that don’t work for me to do things?

I read the PMBOK from PMI, I read websites on Prince2, I studied my employer’s internal project management methods, I read our software vendor’s methods, and I read various books, including ‘Project Management for Dummies‘, which I thought was very good.

(books / versions shown only for illustrative purposes)

In my opinion none could provide a satisfactory answer through the methods and tools listed. To illustrate the point, a Google search for the PMBOK contents shows the 6th edition covering:

  • Section 1: Introduction
  • Section 2: The Environment in Which Projects Operate
  • Section 3: The Role of the Project Manager
  • Section 4: Project Integration Management
  • Section 5: Project Scope Management
  • Section 6: Project Schedule Management
  • Section 7: Project Cost Management
  • Section 8: Project Quality Management
  • Section 9: Project Resource Management
  • Section 10: Project Communications Management
  • Section 11: Project Risk Management
  • Section 12: Project Procurement Management
  • Section 13: Project Stakeholder Management

The titles alone show a lack of focus on strategy and people. I realise some aspects of strategy and people are considered in the sections above, but it’s not enough. I would propose chapter one should be about strategy and chapter two should be about people.

Scope, time, cost and quality relationship

An often cited relationship in project management is scope, time, cost and quality:

The idea being that the four are tied together. For example, if you have a delay (a problem with time), you may be able to reduce scope to still hit your deadline, or alternatively bring on extra resources (increase cost).

If you observe projects in practice I think this simply isn’t true. There are more factors that influence the relationship between these things. As a starting point I would re-draw it like this:

The way you work with people has a relationship with scope, time, cost and quality. If you use people well (with the same number of resources and hence cost) you can deliver more work, faster and with higher quality.

If you have a clear understanding of the strategy your project supports; including the nuances of what the stakeholder wants, you can focus more accurately and provide better quality given the same scope, time and cost. It could be argued that ‘directing focus’ equates to managing scope, but I think this is too nuanced to be bundled in with scope management.

People problems – diving deeper

So far, I have spoken in conceptual terms. I’d like to go into some detail based on my own experience. In one of my first projects part of my team consisted of peers. From the first team meeting my peers decided to challenge me on various aspects of the project approach. Understandably the mindset of some team members at the start of a project might be, “why do we have to work for this person?”.

Dealing with resistance is a very common issue. If a project simply had to execute its planned deliverables, the effort would be vastly less than what it often takes to complete a project. A huge amount of time and effort is spent on managing resistance, justifying actions etc.

The starting point is to consider the source of resistance:

  • Team members don’t think project managers are experienced enough;
  • Team members don’t like project managers telling them what to do;
  • Team members simply don’t like their project manager;
  • Team members think they are better qualified to be the PM;
  • Team members think that the project is a distraction to their work;
  • Team members are too busy;
  • Team members believe the project puts their job or position at risk;
  • Team members have managers who don’t support the project;
  • (the list goes on…)

Often, if you make a list of the resistance affecting your own project, you will find very few items relate to a lack of capacity. Many are emotional. Some are related to fear, which can be founded or unfounded. Some are political.

Regardless of the reason for resistance the challenge remains the same:

How do we get people to do something they don’t want to do?

What the methods say

Popular project management methodologies tend to try to deal with getting things done through estimation, planning and monitoring:

  • Build a plan;
  • Regularly monitor progress on the plan;
  • Report status and escalate tasks that are behind schedule.

This can help organise work and ensure that what is required is clearly conveyed. However, it’s not always effective at making sure work is done on time and with quality. Examples:

Tracking task completion – assume there are tasks such as ‘map process’, ‘write development spec’, ‘build development’, ‘run integrated test’. Some project managers will regularly ask task owners for progress updates in the form of percentage complete. The percentages collected will often be inaccurate. There may be task owners who work diligently to provide accurate progress reports, but there will also be task owners who provide false percentages and leave work until the last minute. With larger projects, there isn’t time to micro-manage and check honesty. This can lead to delays being discovered only at the deadline, which can have a critical-path impact and delay the project. I recommend not allowing percentage-completion tracking on tasks: consider them either done or not done, and keep pressure on for completion prior to deadlines.
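Binary done/not-done tracking might look like this in code; task names and dates are invented:

```python
from datetime import date

# Minimal sketch of binary task tracking: no percentage field at all,
# just a completion flag and a deadline.
tasks = {
    "map process":         {"deadline": date(2024, 3, 1),  "done": True},
    "write dev spec":      {"deadline": date(2024, 3, 15), "done": False},
    "run integrated test": {"deadline": date(2024, 4, 1),  "done": False},
}

def at_risk(tasks: dict, today: date) -> list:
    """Tasks not done whose deadline has arrived or passed."""
    return [name for name, t in tasks.items()
            if not t["done"] and t["deadline"] <= today]

print(at_risk(tasks, date(2024, 3, 20)))  # ['write dev spec']
```

Because there is no percentage to inflate, a task either shows as done or it surfaces on the at-risk list, which forces the conversation before the critical path is hit.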

Reporting status and escalating tasks which are behind schedule – another way to get tasks done is through escalation. There are a number of issues with this:

  • By the time escalation can happen there is already a delay;
  • On bigger programs sponsors and stakeholders can be demanding; in some circumstances project managers are expected to not escalate things to management;
  • In some cases the task owner that should be escalated may be connected to a stakeholder who is unsupportive of the project. Escalating can damage relationships at the executive level.

A people orientated approach

This is why I recommend a ‘people’ focussed approach. This is not rocket science. We all have skills in dealing with people, we learn these naturally as we grow and live. We just need to be conscious of people as a focal point and apply some consideration to them in our project management. A simple process might look like this:

1) Identify problem people

I suggest spending some time to think through the project organisation and identify people who may cause a problem. Make sure to consider the full-time project team members as well as the extended team comprising stakeholders, operations representatives, third parties etc.

As you think through this categorise people into the type of problems they have or might cause. The list from “people problems – diving deeper” above could be a starting point.

In stakeholder analysis there are many approaches for categorising stakeholders as either “for” or “against” the program; these methods can be applied to the wider team.

2) Understand their perspective

The next step is to put yourself in their shoes. Often you may find that their ‘problem attitude’ is warranted. Putting yourself in other people’s shoes is one of the most useful business skills, this will help you tailor your presentations and discussions to gain buy-in and resolve conflict in many situations.

Questions to consider include:

  • What’s their reputation in the organisation?
  • Are they difficult in general but deliver, or do they fail to deliver?
  • Are they a peer? How do they feel about being ‘on your project’?
  • Career – are they happy in their role, or are they trying to move out of it?
  • Are there any non-work items that might impact their contribution to the project (take care with respect to human resource sensitivities)?

3) Plan an intervention

After you’ve identified people that need attention and understood what is motivating them, you can start to think about how to intervene.

In the personal example I gave above, I had peers who were uncomfortable working under my project leadership. My approach was to:

  • Subtly message that I didn’t see it as a leader/subordinate relationship; I saw them as equals and needed and valued their input;
  • Give them space to do their own work (where I trusted their capability);
  • Show my value by helping them resolve any problems they were having, or making minor adjustments to the plan to help them out.

There are different views on the role of project managers. I’m sometimes surprised by some project managers who seem to think they are the most important person and should be served by their team. I personally believe project managers are there to help the team succeed.

Another example. As a management consultant I was often hired by senior stakeholders but worked day to day with more junior staff. Many people have a bad impression of consultants or feel that consultants are there to steal their jobs, and I’ve been in this position many times. On one project with a multinational financial services organisation, we were designing a new pan-European business unit and the accompanying technical architecture. We had one highly skilled technical architect from the client’s UK firm. His initial approach to joining the project was to disagree with and complain about everything. To leverage his skills and remove the conflict, I asked him to take a lead role in facilitating the architecture for Europe. This allowed the project to benefit from his expertise while still incorporating our ideas, and it allowed him to signal to his management that he was making a valued contribution.

In general I’ve found that with problem people the following actions can work:

  • Get them on-board with the conceptual objective of the project;
  • Build mutual respect;
  • Give them space;
  • Bring them more on-board, give them bigger responsibilities;
  • Help them with their challenges/issues in their own tasks;
  • In the worst-case scenario de-scope or limit the impact of the project on their areas, to make success possible even without their cooperation.

Monitor and adjust

As you work through the project, continue to monitor resources joining or leaving the project as well as your existing team members.

Focus on strategy alignment

Project managers may not be involved in strategy development, which means they may never see the study or business case that led to the project they are being asked to manage. This can leave project managers with only a rudimentary understanding of the objective.

This doesn’t empower the project manager to make nuanced decisions on where to focus effort, what risks to accept and other similar topics.

Example – project managing an ERP and CRM system upgrade during a period of high organic growth

A business is upgrading their ERP and CRM systems. The existing systems:

  • Are technically slow;
  • Have a number of problems with effort-intensive workarounds in place;
  • Lack modern capabilities.

This is a must-do project to provide stability for operations and reduce effort spent on manual workarounds.

In parallel the company is experiencing strong organic growth. They are currently hiring in the sales and customer service areas, creating new teams and opening new sales offices. Management are also considering small acquisitions.

The program managers for the ERP and CRM systems may have written objectives summarised to:

  • Replace the existing system without any negative impact to the business;
  • Resolve workarounds caused by existing systems;
  • Deliver new features to help modernise sales and customer service processes.

If the program manager only considers their own work and has limited interaction with the executive team, they may take a very rigorous and comprehensive approach. They may plan to push the organisation to do everything in its power to maximise the benefit of the new ERP and CRM system.

But is this the right approach?

While this is excellent in terms of getting the highest value out of the new systems, this approach could place significant effort demands on operations and be significantly disruptive.

The result of this could be an impact on how well operations are maximising the value from the growth period they are in. The project may distract them from driving sales, on-boarding new hires and setting up the new teams.

The value gained from the ERP and CRM initiative may never pay for the value lost by pulling focus away from business as usual during a growth phase.

Further, in this scenario where we have multiple things happening at once – there is a risk of over-loading people, which can lead to key employees burning out or leaving.

Note that it’s typically a major problem in strategy projects that executive committees take on too many change initiatives. Therefore I believe it’s critical for project managers to know where their project sits on the list of priorities, so they can resolve conflicts with other initiatives in the best interest of the complete organisation.

If the program manager for ERP and CRM has a good relationship with the executive and a good understanding of the strategic priorities they can steer their program to best help the organisation. Things to consider:

  • What’s the priority between the different elements of driving growth vs. the various projects underway?
  • What’s nice to have vs. must have?
  • For the resolution of workarounds – what is the value of solving each problem?
  • For the enhancements – what is the value of each enhancement?

With a good understanding of all of the work underway in an organisation and how it connects to strategy, a project manager can finely adjust their scoping, phasing and solution to better support the overall priorities of the business.

This could be as simple as not fully training teams on enhancements during a busy period and activating them later on a schedule over a number of months.

Example – building an offshore shared service centre

Consider a company setting out to open an offshore shared service centre. A high-level business case was created: it defined an offshore location and a rough order and timeline for the transition of different teams. The program manager starts the work based on this plan.

The program manager creates a detailed plan to execute. Whilst moving into the detail a number of issues can arise which make certain parts of the original plan difficult to execute. The project manager can move ahead with brute force and try to execute in line with the business case.

However, if the project manager makes the effort to understand the principles behind the business case and the perspective of the executive committee they may find factors that open up more options, such as:

  • In years 1 – 3 cash flow and cost are not a concern for the organisation, but they envisage a downturn from year 4 onwards;
  • Quality is critical for the organisation, any loss in service levels is unacceptable;
  • The shared services have to be scalable to support long term growth;
  • Because the business is currently doing well, they should avoid any major disruption in the current period.

With this perspective it starts to become clear that the order of transition of teams is not as important in year 1 and 2. The quality of the platform and the stability of the transitions is important for the long term. With this strategic background the project manager can better plan alternate transition scenarios to avoid issues and present options to the stakeholders.

Common mistakes in project management

To make this article useful I’d like to cover a few of the most common mistakes I’ve seen in recent years and some tips. In keeping with the theme, these are mostly connected to what project managers focus on and how they deal with people.

Death by project admin

A good PMO should be somewhat invisible. All project quality measures should improve with a reduction in management effort. One of the most common mistakes I see being made in project management is using what I call a ‘heavy’ PMO. By this I mean:

  • Using complex and long-winded formats of project templates and tools;
  • Taking a highly theoretical approach and following e.g. PMI by the book;
  • Long meetings with a large number of attendees;
  • Focussing more on the project management method than on the project work;
  • Complex review and approval;
  • Overstaffing – there are studies showing that beyond a certain size, adding people makes teams less productive.

I’ve seen teams where, for every person doing an actual project task, there are five people doing ‘project methodology’ work – writing updates, hosting meetings etc.

I believe that certain aspects of agile and scrum are useful in minimising project admin and re-focussing on project tasks.

Using project tools as communication tools

Long-winded project initiation documents, Gantt charts and detailed issue logs are not communication tools. They shouldn’t be shared widely. They are planning tools for project managers and PMO.

For communication use simple, clear and targeted messages. If you want to present a project plan to stakeholders, share a summary of the critical-path elements. To illustrate effort and complexity, report statistics on the total number of tasks planned, in progress, completed etc. – they don’t need to see the actual list.

Using project management tools as communication formats can create an environment of ‘complexity’ and ‘stress’ by pushing too much content to people.

Inserting PMO between leadership and teams

In the past I set up a PMO to help the leadership of an organisation manage 7 different programs.

One of the mistakes the PMO lead made was with the handling of the weekly review with leadership. The approach they took was to gather the status information from 7 programs and then present that to the leadership. This was a disaster for several reasons:

  • Having the PMO try to understand and repeat the details of 7 programs each week is wasted effort (remembering they are not domain experts, this is very hard);
  • The PMO couldn’t answer questions, or commit to actions from the leadership;
  • The individual programs do not get direct access to ask the leadership questions or to get direct instructions/context from the leadership.

In this case, the PMO should have chaired the process and the meeting. They should ensure each program brings an appropriate update and presents it, and they should make sure questions and follow-up actions are managed.

I recommend trying to create a flat project structure where even the most junior resources can access messages from senior stakeholders. Of course, this has to be expertly facilitated.


Poor communication

Project teams often communicate either incorrectly or not enough.

It’s important to keep everyone involved or impacted by a project appropriately briefed. A few suggestions:

  • For stakeholders or extended stakeholders an initial briefing consisting of 1-2 slides clarifying the objective, the key dates and the impact to them;
  • For the full team a weekly summary of tasks due that week;
  • For intensive projects or phases within a project, a daily stand-up can be an effective way to keep communication flowing without taking much time.

Frequent meetings can be important for certain projects, but they shouldn’t take a lot of time.

I recommend avoiding situations where large groups of people sit working through Gantt charts or Excel lists together; this can be time-consuming and de-energising. Try to limit meeting attendance to those who need to be there.

Document and confirm all actions and agreements

At some point in your career as a project manager, things are going to go wrong. If you are unlucky it will be something serious. Not everyone plays nicely in business and you should be ready to prove your approach was diligent and your actions correct.

  • Document all actions and agreements (concisely);
  • If verbal agreements occur follow up with a brief e-mail to confirm;
  • Carefully archive all key agenda, minutes, e-mails etc. so you can access when needed;
  • Think like an auditor. Ask your stakeholders, peers or whoever is best placed for feedback on plans, risk lists etc., and make sure people had an opportunity to input. This removes the opportunity to blame.

I don’t advise playing in ‘politics’ but I do advise protecting yourself from ‘politics’.

Too many resources

When projects run into trouble, extra resources are often onboarded. It’s important to remember that there is an optimum effective team size. It’s also important to recognise that resources without the correct expertise and experience may slow projects down. Project progress is normally limited by a certain number of people who are “bottlenecks”. Make sure to identify those areas early. They are normally either in the most complex technical part of a new product or in the busiest function/team.

Change management – avoiding one danger of using change managers

As a final topic I’d like to address change management.

Change management is often connected to strategy, people and communications. I believe that one of the reasons change management has become popular is due to the gap in skills displayed by project managers.

Change managers can be excellent and can have a very important role in a project, but when mis-used they can cause a lot of problems for a project.

The situation I would urge organisations to avoid is adding an extra layer of change management between project managers and stakeholders. This can result in a lot of extra effort for project teams and a reduction in the amount of information they receive from stakeholders.

On more than one occasion I’ve seen the following happen:

  • A change manager joins a project team and immediately requires a lot of time from the project manager and other team members to educate them;
  • The change managers work directly with senior stakeholders, sponsors, and business leads, adding an extra layer between those people and the project team and reducing communication and clarity in a critical area;
  • Sometimes the approach taken by change management can help make people feel good but have no solid benefit in terms of what the project delivers.

Change managers should not replace what the project manager should be doing. It’s critical that project managers lead stakeholder involvement and communication, change managers can assist and consult, but should not be another level in the organisation chart hierarchy.

Learning resources to improve people management skills for project managers

Prosci ADKAR

I’d recommend applying tools like ADKAR to your project.

ADKAR is often used with management teams, stakeholders or customers to check if they understand and can contribute positively to change. The diagram shows the ideal timing to apply ADKAR steps, but you can start with them at any time.

I highly recommend using Awareness and Desire. You can use these to run interviews, surveys, brainstorming sessions with your team. I recommend using them with all team members, not just management.

In awareness, you can investigate how much your team really understands what you are trying to do. This can be excellent in helping refine the focus of your team members to what is really important.

With desire, you can easily help team members find out what is in it for them. You can also find out why they might not be behind the project.


Books and role models

I’d recommend picking up some books on leading change, influence and persuasion. There are lots of great titles amongst the lists of top business books.

I’d also recommend that, where possible, you try to identify a role model – someone who can lead and work effectively with people. This doesn’t need to be an active coaching relationship. A lot can be learned just from considered observation.

Final thoughts

What do you focus on when you’re working as a project manager? What do you see as missing skills or issues that come up repeatedly on projects?

Articles Technology


Recent years have seen a resurgence in large organisations taking on major SAP upgrades with the relatively new SAP business suite 4 HANA (S/4HANA) collection of applications. But what exactly is HANA? And what is S/4HANA? How is implementing or upgrading to it different from the R/3 upgrades that were significant programs for many organisations over the last few decades?

As SAP’s core products have advanced and their portfolio has broadened, it’s become difficult to understand how it all fits together. In recent years I’ve met team members and stakeholders working on SAP programs who struggled to articulate the basics of HANA. SAP projects can be complex and challenging partly due to this lack of knowledge. SAP have been addressing this by improving their communications and training, but HANA can still be quite a lot to navigate.

In this article, I’ll briefly explain the history of SAP and hence the context that led to HANA as well as clarifying the technical concepts behind HANA, why they are important, and how the business application has changed.

A brief history of SAP and ERP

SAP has a large portfolio of applications. If we stick to the main enterprise resource planning products we can abbreviate the history of the company to six key versions, roughly a major iteration each decade.


Let’s start from the beginning.

SAP was founded by a number of ex-IBM employees in the early 1970s. Their first system was called RF (real-time financials) and was later renamed R/1. SAP’s product strategy was based on three main concepts:

  • Provide a standardised ‘off-the-shelf’ solution – in the days when many companies were building their own applications from scratch, SAP’s plan was to build a software product that worked for many companies with only minor configuration;
  • Real-time – information entered into the application is available across the entire application in real-time;
  • Integrated – the same data is shared across multiple functional parts of the system reducing the need for redundant data entry.

What exactly does ‘real time integrated’ mean?

Consider an example from manufacturing. Raw materials are converted to finished products and sold and shipped to a customer. This process involves many departments: procurement, warehousing, manufacturing, finance, sales etc. If we consider only a part of this – the receiving of raw materials from a supplier – two activities need to occur: the inventory records must be updated, and the financial accounts must be updated.

Prior to ERP, these activities may have been done separately. For example, warehouse management may have updated their inventory list at the end of the day and then sent a copy of the information for finance to update the accounts. Throughout the day, inventory and financial information would not have been up to date or aligned, and effort was wasted entering the same data twice.

With ERP, when warehousing updates inventory, the accounting records are updated automatically in real-time. Under the hood, ERP has a lot of connections across different tables that keep information in sync for different functions and teams.

Once we understand this we understand the value of ERP systems and why they became so popular. We can start to imagine how complex they are as they connect processes and data across the entire enterprise. Take the simple example above and imagine how the same logic could be applied across sales, marketing, production etc.
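The goods receipt example above can be sketched in a few lines of much-simplified code: a single posting function updates both the inventory record and the accounting record in one step, so the two can never drift apart. This is a toy illustration, not SAP’s actual data model – the table and account names are invented.

```python
# Toy sketch of a real-time integrated posting: one business event
# (receiving raw materials) updates inventory and accounting together.
# Table and account names are invented for illustration.

inventory = {}        # material -> quantity on hand
general_ledger = []   # (debit_account, credit_account, amount) postings

def post_goods_receipt(material, quantity, unit_cost):
    """Record a goods receipt: inventory and finance update in one step."""
    inventory[material] = inventory.get(material, 0) + quantity
    value = quantity * unit_cost
    # Debit raw materials stock, credit a goods-received clearing account.
    general_ledger.append(("Raw Materials Stock", "GR/IR Clearing", value))
    return value

post_goods_receipt("steel-coil", 10, 250.0)
print(inventory["steel-coil"])   # 10
print(general_ledger[-1])        # ('Raw Materials Stock', 'GR/IR Clearing', 2500.0)
```

The key design point is that there is no end-of-day copy step: both records come from the same posting, at the same moment.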


Moving on to 1979, R/2 was released.

The switch from R/1 to R/2 was a more subtle evolution from a technical perspective with increases in the core functionality as SAP started to increase their customer base.

I can’t write too much about R/1 and R/2. When I started my career in an IT team in 2000 R/2 was on the way out. I was trained in using AS/400 mainframe and R/2 but I had only a short time to use it. In fact, most of my experience of R/2 is extracting data from it to cleanse before loading to R/3!


Moving on to the 90s and R/3.

The switch from R/2 to R/3 was significant with a number of major changes:

  • R/1 and R/2 are classed as mainframe systems and R/3 as a client/server system. Skipping the technicalities this allowed for:
    • A fuller ‘graphical user interface’ on desktops (i.e. windows desktops or laptops);
    • Cheaper, easier-to-scale and more flexible set-up on the server side (note: some complex debate exists on some of these).
  • The shift from R/2 to R/3 and the ongoing development of R/3 through the 90s also represented significant expansion in the business processes covered.

R/2 and R/3 are very different systems. To switch from one to the other you need to extract and transform data before loading to R/3, and you also have to map all processes. In my experience, switching from R/2 to R/3 was similar to switching from a non-SAP system to R/3. In the 2000s I managed several upgrades from R/2 to R/3 as well as upgrades from systems like BAAN, and the approach and work involved was similar.

When talking about R/3 it’s also important to consider scale and globalisation. Mainframe systems were typically implemented for a single country or business unit. The cheaper, more scalable architecture of R/3 provided an opportunity to implement one R/3 system covering an organisation’s business across an entire region or the world. This is important, as it’s one of the factors which led to bigger data volumes and more performance challenges.

R/3 evolved year by year into a complex, integrated system used in large organisations on a global scale. This sets the scene for what is to come with HANA.

A note on the R/2 vs. R/3 look and feel

For a simple illustration of how different R/2 and R/3 are we can look at a couple of screens.

An R/2 terminal screen
An R/3 graphical user interface screen
  • R/2 has a very simple interface where function keys and codes are used to navigate between fields;
  • R/3 includes menus, tabs, buttons, ‘help lookups’ etc.

We will see that there is also a significant jump in how SAP looks and feels between R/3 and S/4HANA.

A note on R/3 process scope

This is a diagram that anyone who worked on R/3 will fondly remember; it outlines the different modules or ‘functional areas’ covered by R/3.

While ERP and R/3 may seem complex – and they are – all they do is record business activities by entering transactions in a system and storing the information about what happened in a database. They then let you view and adjust the information to manage your enterprise. Here are some simple examples for a few of the modules shown above:

  • FI – finance
    • Record periodic accruals.
  • CO – controlling
    • Record / view expenditure against a department
  • SD – sales and distribution.
    • Record a sales order for a sale to a client
  • PP – production planning.
    • Plan a production schedule
  • HR – human resources
    • Pay employees.

2000 – 2015: mySAP / ERP

When we come to 2000 the branding becomes a little confusing.

There were a number of key focus areas, and we saw R/3 being referred to as mySAP and also ERP (technically ECC). Noteworthy focusses were:

  • The emergence of web technologies and the need for ERP to connect on a B2B or B2C basis via the internet – mySAP.com was used as a brand and various integration technologies were available;
  • An increasing number of ‘add on’ products for data analysis;
  • Acquisition of and integration of niche competitor software into the SAP landscape.

A note on data analysis

R/2 and R/3 are technically optimised as systems to record data. They are not optimised to analyse data. The late 90s saw the release of the first business warehouse system (BW). This system is technically architected to analyse data. Organisations would use ERP to record data and carry out simple real-time reporting and then send data in daily batches to BW for more complex analysis. I’ll come back to this with an illustration later.

Acquiring competitors

During this period there was a boom in niche software providers, particularly in areas such as data analytics. SAP took the opportunity to acquire some leading competitors to cover areas where their applications were weaker. For example:

  • Analytics, planning & reporting – e.g. Outlooksoft, Business Objects
  • User experience & process execution in niche process areas – e.g. Successfactors, Concur, Ariba.

What’s interesting to note is that with the addition of business warehouse the SAP solution was no longer a real-time integrated architecture.

Furthermore, the architecture for many companies was becoming somewhat convoluted, with many different applications from different providers. This in fact led to demand for a lot more solutions in areas like interfacing and master data management.

Business suite

During the 2000s the number of processes covered by R/3 or ERP continuously increased, and in addition a number of further applications were launched to provide more advanced capabilities in certain areas. SAP started to package a number of these together in the late 90s under the name “business suite”. The main components of business suite are:

  • ERP (enterprise resource planning):
    • Basically the evolution of R/3 – the core of business suite including financials, human capital management, operations, corporate services etc.
  • CRM (customer relationship management):
    • Sales, marketing, and service.
  • SCM (supply chain management):
    • Procurement networks, production networks, distribution networks, planning, organisation and execution of supply processes.
  • PLM (product lifecycle management):
    • Product ideation to production.
  • SRM (supplier relationship management):
    • Procurement for materials, goods and services. Requirements determination to ordering to payment.

A note on OLAP vs. OLTP

As mentioned, a major issue with R/3 was its inability to handle reporting for increasing data volumes, especially with the growing demand for quick analysis. R/3 is not designed to read data quickly. This led to the development of stand-alone systems, such as SAP’s business warehouse, that were optimised to read data. The following terms were used to describe these two different types of systems:

  • OLTP – online transaction processing (e.g. R/3)
  • OLAP – online analytical processing (e.g. BW)

As a result of this, large organisations often ended up with systems landscapes that included multiple OLTP systems and multiple OLAP systems, all connected together.

And this is before we even consider topics such as web applications, big data etc.!
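The two workload shapes behind the OLTP/OLAP split can be illustrated with a toy dataset: an OLTP-style operation touches one record by key, while an OLAP-style operation scans the whole table to aggregate. The data and field names here are invented for illustration.

```python
# Illustrative dataset – not real SAP tables.
orders = [
    {"id": 1, "region": "EU", "amount": 100.0},
    {"id": 2, "region": "US", "amount": 250.0},
    {"id": 3, "region": "EU", "amount": 75.0},
]

# OLTP-style access: fetch a single record by key (few rows, all columns).
order_2 = next(o for o in orders if o["id"] == 2)

# OLAP-style access: aggregate across the whole table (many rows, few columns).
revenue_by_region = {}
for o in orders:
    revenue_by_region[o["region"]] = revenue_by_region.get(o["region"], 0.0) + o["amount"]

print(order_2["amount"])    # 250.0
print(revenue_by_region)    # {'EU': 175.0, 'US': 250.0}
```

A database tuned for the first pattern (indexed row lookups) tends to be slow at the second (full-table scans), which is why the two were historically split into separate systems.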

Increasing complexity

Prior to the launch of HANA it’s useful to reflect on where the SAP portfolio was:

  • The core of ERP had been developed over decades with a continuing increase in the volume and complexity of processes covered;
  • Multiple industry-specific solutions were also available;
  • Requirements for many geographies were covered;
  • There was a split between applications for recording transactions (OLTP) and carrying out simple reporting and applications for information analysis (OLAP). Real-time integration was not present across the entire range of applications;
  • The product portfolio became huge, in part due to multiple new products being developed by SAP and in part by a large number of acquisitions;
  • Major advancements in the standards and approach to integration and web technologies over the years.

Altogether, the complexity of business systems landscapes has increased massively since the mainframe days. I think this topic is not addressed as much as it should be within architecture plans; while we should embrace new technologies, we should also rationalise old technologies.

This brings us to the 2010s, where part of the focus from SAP is on reducing the complexity of the core product while continuing to advance new technologies. HANA plays a significant role in reducing complexity and in bringing analytics back into the real-time, integrated core.


This brings us to the question: what is S/4HANA? It’s short for “SAP business suite 4 SAP HANA” and it’s a collection of different things. This is one of the reasons why HANA is not well understood. It can’t be correctly called either a technical upgrade or a functional enhancement; it’s a combination of the two. Furthermore, as part of an S/4HANA conversion there are a lot of optional items. Each company needs to define its own scope for an S/4HANA conversion based on its own objectives.

In this article I’ll cover three main building blocks of S/4HANA. These are:

  • The HANA platform (or HANA database) – a new database that solves the problems faced by ERP;
  • S/4HANA (i.e. the HANA business suite) – an updated version of business suite 7 taking advantage of the benefits of the HANA platform;
  • Fiori – a new approach to UI with more focus on flexible app style development and mobile.

In this post, I’ll spend most of the remaining time explaining the HANA platform and how it impacts the business suite, which I think is not commonly understood. For the business suite and Fiori I’ll give only a very brief overview, as these topics are quite deep and SAP has plenty of information available. Plus, these topics need to be looked at piece by piece, e.g. by function or UX case.

The HANA Platform

Understanding memory

To understand HANA we need to give a little consideration to how memory works in a computer. Bear with me, it’s not that technical!

As with many applications, ERP was designed based on what could be done at the time with the technology available. The main constraints were the cost of processing power and storage. The hardware limitations led to limitations in the logic of the software, which led to a number of the problems discussed above.

However; considering Moore’s law, the increase in processing power and storage and reduction in hardware costs gave SAP the opportunity to re-think the architecture of ERP. This brings us to HANA.

HANA is the term used to refer to a new database whose development was led by one of the founders of SAP. HANA stands for:

  • Hasso’s New Architecture;
    • (Hasso Plattner is one of the five founders of SAP);
  • or alternatively, “High-Performance Analytical Application”.

You can learn about HANA from Hasso himself on the open learning platform from the Hasso Plattner Institute for software systems engineering (note this is very technical, only for people who love databases I guess!).

There are three key features that allow the HANA platform to solve the problems ERP and BI were facing. These are:

  1. In-memory computing;
  2. Columnar database management & data compression;
  3. Parallel processing.

We will take a look at the first two topics to understand better what HANA is. The third, parallel processing, is a fairly common concept whereby modern computers use multiple processors simultaneously on an operation.

How memory works

To start the explanation of how HANA uses memory, let’s consider the example of a regular desktop computer. Memory can be categorised into 3 types:

  • Auxiliary memory: the largest and cheapest memory, either magnetic disk or solid-state drive. Data is retained when the power is off. Reading or writing data is extremely slow compared to the other types;
  • Main memory: mostly made up of RAM; more expensive, but much faster than auxiliary memory. Data is lost when the power is off;
  • Cache memory: a small amount of very fast memory close to the CPU that stores data the CPU is currently using.

The biggest factor determining how fast a computer can process is how quickly it can read and write memory. If the processor needs to access auxiliary memory, the process will be very slow.

R/3 doesn’t run on a desktop; it runs on a server. But don’t be put off by IT terminology – a server is just a computer, in the same way a desktop is a computer.

So we can consider R/3 ERP as a big computer with massive data volumes; one of the main reasons it can’t be used for advanced data analysis is the time it takes to retrieve data from auxiliary memory.
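A rough back-of-envelope calculation shows how much this matters. The throughput figures below are illustrative assumptions (ballpark sequential-read rates for a spinning disk and for main memory), not benchmarks:

```python
# Back-of-envelope: time to scan a 1 TB dataset sequentially.
# Throughput figures are illustrative assumptions, not benchmarks.
DATASET_BYTES = 1 * 10**12          # 1 TB of business data
DISK_BYTES_PER_SEC = 200 * 10**6    # ~200 MB/s: spinning disk, sequential read
RAM_BYTES_PER_SEC = 20 * 10**9      # ~20 GB/s: main memory

disk_seconds = DATASET_BYTES / DISK_BYTES_PER_SEC
ram_seconds = DATASET_BYTES / RAM_BYTES_PER_SEC

print(disk_seconds / 60)   # ~83 minutes from disk
print(ram_seconds)         # ~50 seconds from RAM
```

Under these assumptions a full scan takes over an hour from disk versus under a minute from RAM – a difference of two orders of magnitude before any indexing or compression is considered.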

In-memory computing with HANA

As technology advances and component prices come down, main memory is now available at a cost where it can hold volumes of data that previously could only be stored in auxiliary memory.

To directly quote SAP: “SAP HANA runs on multi-core CPUs with fast communication between processor cores, and containing terabytes of main memory. With SAP HANA, all data is available in main memory, which avoids the performance penalty of disk I/O” (i.e. read/write to auxiliary memory).

In plain English: the complete dataset within ERP is stored in what we think of as ‘RAM’ on our desktops or laptops, and is easily accessible by the processor.

With HANA we don’t need auxiliary memory for day-to-day operations, as shown below. Note, however, that it is still used for backup and disaster recovery, for example if power is lost.

Columnar data store with HANA

In addition to in-memory storage, HANA applies database management methods that are much more efficient at compressing data. The more the data can be compressed, the faster the system can run.

Consider the table below. Traditionally, an OLTP-type database holds data in a row store. If you compare the row store with the alternative method, the column store, you will quickly realise that in a column store many identical values sit side by side. Intuitively, a columnar store should be much easier to compress.

Compression is a broad and technical topic, but simply imagine a column for ‘city’ in a table of addresses. We may have hundreds, if not thousands, of entries of e.g. ‘London’. If that’s the case, we don’t need to store ‘London’ every time; we can instead store the range of rows that have London as the city. This also means that if there is a query about London, the application does not need to work through every row to get the results.
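To make the intuition concrete, here is a toy run-length encoder in Python; this is only a sketch of the idea, not how HANA actually implements compression:

```python
def run_length_encode(column):
    """Collapse consecutive repeated values into (value, count) pairs."""
    runs = []
    for value in column:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [(v, c) for v, c in runs]

# A sorted 'city' column with many duplicates side by side:
cities = ["London"] * 4 + ["Paris"] * 2 + ["Tokyo"] * 3
print(run_length_encode(cities))
# → [('London', 4), ('Paris', 2), ('Tokyo', 3)]
```

Nine stored values collapse to three value/count pairs; the longer the runs, the greater the saving, which is why sorted columnar data compresses so well.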

More information:

Combining the ‘in-memory’ design with the ‘columnar’ store, the HANA platform provides a database that operates far faster than the database options used in R/3, Business Suite 7 or any traditional OLTP system. This is quite a big deal:

  • We no longer need to separate OLTP and OLAP applications to different databases/applications. A single HANA database and application can do both types of operations effectively. This is an opportunity to massively simplify the hardware, technical architecture and data architecture.
  • We can simplify the business suite applications. One example of this: Because OLTP systems were generally slow at reading and analysing data there are often many subtotals and totals tables that are updated when transactions are processed. These tables along with a lot of complexity can be simplified or removed.

SAP Business suite 4 HANA – simplification items

Recall we said there are three main components of S/4HANA.

Now that we have covered the HANA platform we can look at the business suite. The business suite present in S/4HANA is essentially an updated version of Business Suite 7.

We could say that the conversion from, say, R/3 to S/4HANA is a technical upgrade from a database perspective. But from an application perspective there are further changes and enhancements, many of which are enabled by the database conversion.

A big part of an S/4HANA implementation is understanding which simplifications and enhancements are available and which you would like to implement. Not all simplifications are mandatory, and each simplification or enhancement has its own impact on the process, data and so on.

SAP provides a simplification list for each HANA release. The current S/4HANA version is 1909 and the list is here:

I won’t go through these in detail, it’s a huge list. One key note worth mentioning is that the majority of simplifications are within the finance and logistics areas. Some examples from finance:

  • The universal journal (major simplification to the tables/ledgers and hence reporting in the finance area);
  • Changes to transaction codes (removal of old / introduction of new);
  • NewGL (an updated version of GL which was available prior to S/4HANA is implemented as part of S/4HANA);
  • New Asset Accounting;
  • etc.

For finance the simplification journey started back with ERP (ECC 6.0), at this time NewGL was launched which provided a significant simplification to the way financials and controlling worked:

  • Simplifying the number of internal ledgers (e.g. removal of the FI-CO reconciliation);
  • Adding leading / non-leading ledger functionality for multiple valuation requirements;
  • Extending the GL code-block e.g. for IFRS segmentation requirements.

NewGL provided a starting point for further simplifications enabled by HANA.


Fiori is SAP’s new approach to user interface design.

One of the main objectives of Fiori is to allow developers to quickly create ‘apps’ as an interface for specific activities or tasks within SAP.

These apps can feature improved visual design, role-specific actions and be adaptable between desktop, tablet and mobile etc.

Fiori starts from the launchpad where different apps can be placed as tiles along with global elements such as user personalisation options, search and notification.

Image source:

This provides a significant step forward in the ability to customise the interface to specific roles and improve the user experience. It’s easy to see how having key figures and activities available at a glance could have a number of benefits.

Fiori comes with a number of SAP provided apps and organisations can also develop their own apps.

For a S/4HANA conversion how much effort should be placed on Fiori? How many apps will be deployed? How much time will be spent on optimising launchpads for specific roles?

Implementation considerations

As with any ERP implementation or upgrade, a conversion to S/4HANA will be a complex project. SAP provides free training on openSAP:

In addition to training there is a recommended roadmap. Between these it’s possible to plan out all required activities.

I’d like to highlight three areas of focus:

1. Business case development

As we’ve seen, an S/4 conversion is like a hybrid between a technical upgrade and an introduction of new business features. With this in mind, what is the business case behind the investment? It could range between:

  • It’s a ‘must-do’ program to ensure we stay on the latest version, but we want to minimise cost and effort;
  • It’s an opportunity to simplify our IT architecture, access as many of the business suite enhancements as possible and implement Fiori for all our users with our own apps. We want to invest a lot of time and effort and improve the way we work.

When considering the benefits, it’s critical to ensure that experts who understand the state of the current systems and current ways of working are involved.

In my experience the pre-sales and business case activities are often limited to senior management and architects; this can lead to an overestimation of the benefits that the users of the system will receive.

I’d recommend validating the business case with functional and technical experts. This may lead to an adjustment of the plan for improved scope, more refined focus and a more realistic project plan.

2. Preparation is critical

The recommendations and learnings which I’ve applied to R/3 and ERP upgrades also apply to S/4HANA. The biggest of these is related to preparation. Serious work should start 3-6 months prior to the start of the project proper. The work that should start early includes topics such as:

  • Master data cleansing
  • Transaction data cleansing (i.e. aging analysis)
  • Ensuring that existing processes are understood and documented
  • Ensuring that existing configuration is understood and documented
  • Ensuring issues and problems are understood and documented
  • Ensuring custom developments are understood and documented
  • Ensuring the right resources are available for the project
  • Ensuring the biggest pain points within the current process / system steps are understood and have been included in the scope consideration as part of the business case.

The majority of SAP projects run into problems in the requirements, fit-gap and testing stages because the current system and process was not well understood or considered or there were hidden issues. It’s critical to surface these early.

3. Involve the right people early

Typically a large organisation may run a global SAP upgrade/implementation something like the below:

  1. A small team develops a business case with senior management involvement;
  2. A first project runs the upgrade for one business unit/country/region as a pilot and as part of this defines a global standard approach to the upgrade, the project is highly biased to one business unit/geography;
  3. The upgrade is then executed across different geographies/business units, they struggle with the design decisions made by the first unit;
  4. Within each individual project, the first stages start with the involvement of a small number of people and as they progress an ever-increasing number of people up to user acceptance and training activities with the full teams.

With this approach the best experts and ‘real knowledge’ may not see the planned solution until user acceptance testing occurs. By this time it’s too late to change anything without major project delays. I’ve always disliked how user acceptance testing is run in traditional IT projects: it’s rarely a genuine chance to confirm a system works acceptably for the business; more often it’s an argument over whether the system works according to what was agreed and written down in previous project stages.

When you plan your project, look at the team staffing, from the very first phases:

  • How many of the team members have worked in your business operations?
  • How many of your team members are middle managers?
  • How many of your team members are external contractors or consultants that don’t know the details of your operations?

Free up at least one operational expert from each in-scope function and ensure they are involved from the start. Make sure this is the kind of person who will continuously socialise the design and gather feedback and input from their peers.

Final thoughts

There are still many topics to be considered, such as the details of the simplification list for each function and the impact on ‘add-on’ systems, e.g. analytics. However, hopefully understanding what HANA and S/4HANA are from an evolutionary and technical perspective makes it easier to see how it all fits together.

What aspects of understanding and planning SAP related work do you find most challenging?

Articles Technology

Blockchain: how it works

Blockchain has been one of the most hyped technologies of the last decade, thanks to the massive profits that were made from bitcoin and other cryptocurrencies. Despite a lot of the smoke and mirrors that surround blockchain, it is a fascinating technology.

This post is based on a presentation I gave to my fellow international colleagues in Japan in 2019. I was motivated to create and deliver this presentation to combat a lot of the misinformation around blockchain solutions and when they should and shouldn’t be considered. To understand the benefits and use cases for blockchain it’s useful to understand how it works.

This requires a brief walk through various concepts, including mathematics and cryptography, which play a big role in making blockchain possible. Bear with me as we work through them; it will hopefully all make sense by the end of the post.

The big idea

A decentralised ledger

Blockchain can be considered simply as a ledger; what makes it unique is its decentralised nature. Blockchain has several key properties:

  • It’s a ledger that records transactions between parties, but as we will see it can track anything, going beyond a traditional ledger
  • Transactions are verified using cryptography
  • Data is decentralized; transactions are stored and verified across all computers in the network (nodes)
  • Therefore no need for an intermediary (e.g. bank / data centre / government body)
  • Transparent – data is public; anyone can view transaction history
  • Immutable – unchangeable
  • Partial privacy / anonymity – while data is public it is not connected to real IDs.

Don’t worry if these don’t make sense now, we will cover them during this discussion.

A traditional ledger – bank account example

A bank account is a simple example of a ledger. However, a ledger can be any book of record: shipments, patient records, insurance policies etc. Consider the properties of a traditional bank account:

  • Account holder has no direct control
  • Central authorities have power over the account (the bank; government etc.)
  • Data is stored centrally; vulnerability to hardware failure or attack
  • Privacy is limited; all transactions require proof of ID / address


As with the bank account example, a 3rd party acts as a central authority to control and manage the account. What if it were possible to keep a book of record without relying on a central authority? This would provide the following:

  • Architectural independence (no reliance on a single physical server)
  • Political independence (no single point of control from any organisation)
  • Logical independence (data is duplicated across many physical machines).

The potential benefits of these include:

  • Less susceptible to attack
  • Less susceptible to failure
  • Improved privacy / anonymity (partial)
  • Improved transparency (records are publicly traceable and unchangeable)
  • Not susceptible to control by a single authority.

Blockchain origins

In 1991 a paper entitled “How to Time-Stamp a Digital Document” presented a major landmark in cryptography and provided the concept which would allow us to send and receive documents across a public internet with trust. This provided the foundation for blockchain.

In 2008 Satoshi Nakamoto published a paper entitled “Bitcoin: A Peer-to-Peer Electronic Cash System” which outlined how blockchain works. Satoshi Nakamoto is the name used by the person or group of people who developed bitcoin; their real identity remains unknown.

You could read through these two papers and piece together everything we will talk about here; however, it does get rather technical, and I’ve included a number of diagrams in this post that I hope will make the concepts easy to grasp.

A brief history of blockchain

  • 1991 Stuart Haber and W. Scott Stornetta – 1st paper outlining the use of cryptographically secured blocks to preserve integrity of past information
  • 1993 Proof of work concept established as a countermeasure to spam / network abuse
  • 2008 Satoshi Nakamoto published famous white paper “Bitcoin: A peer-to-peer electronic cash system”
  • 2014 Ethereum – a blockchain that can be programmed and can run computation – world computer “Ethereum virtual machine”
  • 2015 Bitcoin gets serious attention
  • 2016 Bitcoin embraced by FSI
  • 2017 Blockchain named foundational technology in HBR

No need for a trusted party

One of the most noteworthy benefits of blockchain is that no central authority is required.

This means blockchain can be used for an electronic currency; as is the case with bitcoin, without any involvement from banks, central banks, or governments etc.

Cryptocurrencies market

Blockchain has multiple applications, cryptocurrencies being the original and one of the most popular use cases. For context, the cryptocurrency market currently stands at around US $250,000,000,000. May 2020 data from

  • Cryptocurrencies:  5,500
  • Markets:  22,416
  • Market Cap:  $255,219,845,520

Cryptography as an enabler of blockchain

To understand the concept of blockchain and some of the terminology involved we need to understand some number formats used in mathematics and technology.

A 256 bit number

What do the three sets of digits above have in common?

They all represent the same number, but in three different formats.

We normally count in decimal; also known as base 10. This simply refers to counting with 10 digits; 0,1,2,3,4,5,6,7,8,9. There are two other ways of counting that are important. Binary which uses 2 digits 0 and 1. And Hexadecimal which uses 16 digits; 0-9 and A-F. Binary is used by computers and hexadecimal is commonly used as a short form to record long numbers.
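To make the base conversions concrete, here is a small Python sketch (the example value 255 is my own choice):

```python
n = 255  # an example value of my choosing

print(format(n, 'b'))  # binary (base 2):       11111111
print(format(n, 'x'))  # hexadecimal (base 16): ff
print(n)               # decimal (base 10):     255

# All three strings name the same number:
print(int('11111111', 2) == int('ff', 16) == 255)  # → True
```

Note how the hexadecimal form is the shortest, which is why it is used to write long numbers such as hashes and keys.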

A 256 bit number is simply a number that when written in binary has 256 digits. These numbers are commonly used in cryptography and blockchain as they are hard to guess.

How hard is a 256 bit number to guess?

  • They have 2^256 possible combinations
  • 116,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 combinations

Because long numbers of this nature are very hard to guess, they play a critical role in cryptography; unlike a 10-digit password, they can’t easily be brute-forced by computers (i.e. trying all combinations).

Basics of cryptography – can you de-crypt this?

In cryptography we apply a rule known as a cipher to a message (sometimes called plain text) to create a cipher text. For example:

A cipher was applied to a message to create the text Mjqqt, Btwqi! – can you figure out what the original message was?

In this case it’s quite simple to break. The cipher is a simple and fairly easy rule known as a Caesar cipher; it was first used in the Roman Empire, and simply involves shifting the letters a number of places up or down the alphabet.

In this case we moved each letter 5 places down the alphabet. This is also known as a map function.
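The shift can be sketched in a few lines of Python (the function name `caesar` is my own):

```python
def caesar(text, shift):
    """Shift each letter `shift` places along the alphabet, wrapping around."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave punctuation and spaces untouched
    return ''.join(result)

print(caesar("Hello, World!", 5))   # → Mjqqt, Btwqi!
print(caesar("Mjqqt, Btwqi!", -5))  # → Hello, World!
```

Decrypting is just applying the same function with the opposite shift, which is exactly why such ciphers are easy to break.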

About functions

  • A function takes an input and produces an output: f(x) = y
  • An input is part of a whole (domain) e.g. integers, prime numbers
  • A function where N inputs produces N outputs is a map function e.g. f(1,2,3,4) = (1, 4, 9, 16)
  • A function where N inputs produces 1 output is a reduce function e.g. f(1,2,3,4) = 10
  • An example of a famous function is E = mc²
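A quick Python illustration of the map and reduce ideas above:

```python
from functools import reduce

# A map function: N inputs produce N outputs.
squares = list(map(lambda x: x * x, [1, 2, 3, 4]))
print(squares)  # → [1, 4, 9, 16]

# A reduce function: N inputs produce 1 output.
total = reduce(lambda a, b: a + b, [1, 2, 3, 4])
print(total)    # → 10
```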

There are two key ‘cryptographic’ functions that enable blockchain, these are:

  • Hashing functions
  • Elliptic curve digital signature functions.

For general purposes the term algorithm can be considered synonymous with function, however in certain areas of maths and computer science the term may be used with slightly different meanings.

Hashing functions

A hashing algorithm creates a short ‘fingerprint number’ which can represent an arbitrarily large amount of information.

  • Use a ‘hash’ function to produce an output
  • Examples are SHA256, MD5, Bcrypt, RIPEMD
  • A hash function takes an input of any size (e.g. from one word to the entire works of Shakespeare)
  • Produces an output of a fixed size (referred to as a digest)
  • Computational efficiency: for a given input the output should be easy to compute
  • Deterministic: for a given input, must always give the same output
  • Pre-image resistant: the output must not reveal anything about the input
  • It has to be practically impossible to reverse engineer to derive an input (one-way function)
  • Collision resistant: it must be practically impossible to find two different inputs that produce the same output.

To give an illustration of how a hashing function might work internally, it does something like the following:

  • Convert English letters (e.g. ASCII) into 1s and 0s
  • Move the first four bits from left to right
  • Separate every other bit
  • Convert those two parts into base 10 numbers
  • Multiply the two numbers together
  • Square that number
  • Convert the numbers back to binary
  • Chop 9 bits off the right side to get exactly 16 bits
  • Convert the binary numbers back to English letters.

There are online tools that will convert any data into a hash. Here are three examples of some short, simple text fed into a SHA256 hash function:

  • No matter the size of the input, the output will always be 256 bit
  • Even minor changes to the input “Alexander” > “AleXAndeR” will dramatically and unpredictably change the output.
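Both properties can be reproduced with Python's standard `hashlib` module (the exact digests are omitted here; run it to see them):

```python
import hashlib

for text in ["Alexander", "AleXAndeR", "the entire works of Shakespeare"]:
    digest = hashlib.sha256(text.encode()).hexdigest()
    print(f"{text!r} -> {digest}")

# Each digest is 64 hex characters (256 bits) regardless of input length,
# and the first two inputs, differing only in case, give unrelated digests.
```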

Elliptic curve digital signature functions

Elliptic curve functions are used for digital signatures, which serve to:

  • Validate the sender of an electronic message
  • Ensure the message wasn’t tampered with

The mathematics is complex, so we will skip how the required numbers (known as a “key pair”) are generated, but we will look at how the process works in detail.

A good starting point for more detail is wikipedia:

Digital Signatures

Public and private key cryptography

We can use the elliptic curve function to create two connected numbers, known as a public and private key pair, which are used in cryptography.

  • A key is used to encrypt and decrypt data
  • Symmetric cryptography: the same key encrypts / decrypts
  • Asymmetric cryptography: a different key encrypts / decrypts
  • Key-pairs are generated using the elliptic curve function
  • A key-pair is a private (aka secret key) and a public key
  • A public key is announced and known to the world
  • A private key is kept secret
  • It’s hard (practically impossible) to know someone’s private key if you know their public key
  • Order is not important, private key to encrypt and public to decrypt, or public to encrypt and private to decrypt.

Encryption for confidentiality

The following diagram shows how one party can encrypt a document, send it across a public network, and have it decrypted by another party. In this diagram pk represents a person’s public key, while sk represents their private / secret key. Each person generates their own key pair and only ever shares or publishes their public key.

  • Party A has some sensitive data they want to send to party B (they want to ensure no one other than party B can see the data)
  • Party A has access to party B’s public key (everyone does)
  • Party A uses a cryptography function along with party B’s public key to convert the data to encrypted data
  • Party A publishes this encrypted data on the public internet or sends it to party B via e-mail etc.
  • This encrypted data can now only be decrypted with party B’s private (secret) key
  • Remember party B keeps their private (secret) key a secret; no one else has access to it
  • Party B uses a cryptography function along with their private (secret) key to decrypt the data.

This is a simple example of the basics of cryptography that allow us to send secure data across the internet.
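The key pairs in this article are elliptic-curve based; as a runnable stand-in, here is the classic textbook RSA example, with deliberately tiny and insecure numbers, showing the same encrypt-with-one-key, decrypt-with-the-other idea:

```python
# Textbook RSA with deliberately tiny numbers -- illustrative only, never secure.
p, q = 61, 53
n = p * q                 # 3233: the modulus, part of both keys
e = 17                    # public exponent: (e, n) is the public key
d = 2753                  # private exponent: (d, n) is the private key

message = 65
cipher = pow(message, e, n)   # anyone can encrypt with the public key
plain = pow(cipher, d, n)     # only the private key holder can decrypt
print(cipher, plain)          # → 2790 65
```

Real systems use keys hundreds of digits long, so deriving d from (e, n) is computationally infeasible; with toy numbers like these it would take seconds.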

However there are some problems with this approach. How does party B know that the encrypted data they received came from party A? How do they know it wasn’t intercepted and modified?

This is where we need to take things further and look at a more complicated and complete example of the use of digital signatures.

Encryption using digital signatures

1) Party A encrypts the data with their private key.

2) Party A combines the original data with the ‘party A private key’ encrypted data and then encrypts this package with party B’s public key.

3) The package of encrypted data is sent over the public network.

4) Party B decrypts the package of encrypted data with their private key to reveal the data and the ‘party A private key’ encrypted data.

5) Party B decrypts the ‘party A private key’ encrypted data with party A’s public key and compares it with the other data file. If they match, you can trust that the encrypted data was sent by party A and was not tampered with or changed.

This is a little difficult to follow and worth reading through a few times; it becomes quite straightforward once you get used to working with key pairs.
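A minimal sketch of the sign-then-verify idea, using textbook RSA toy numbers as a stand-in for elliptic curves (the helper `toy_hash` is my own simplification, not a real scheme):

```python
import hashlib

# Textbook RSA toy numbers (never secure); (e, n) public, (d, n) private.
n, e, d = 3233, 17, 2753

def toy_hash(data: bytes) -> int:
    """Fold a real SHA-256 digest into the toy key's range -- my simplification."""
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % n

message = b"party A pays party B 10"
signature = pow(toy_hash(message), d, n)  # party A signs with the private key

# Anyone holding party A's public key can check the signature:
valid = pow(signature, e, n) == toy_hash(message)
print(valid)  # → True
```

If the message is altered in transit, its hash changes and the check fails, which is exactly the comparison in step 5 above.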

Generating a key-pair

  • Choose a random 256 bit number as a private key
  • Use the elliptic curve function to generate the private – public key pairs
  • Key pairs form the basis of bitcoin addresses
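The first step above can be sketched with Python's `secrets` module (the elliptic-curve derivation of the public key is omitted, as it needs specialised code):

```python
import secrets

# Step 1: choose a random 256-bit number as the private key.
private_key = secrets.randbits(256)
print(hex(private_key))  # 256 bits is up to 64 hex digits

# Step 2 (not shown): elliptic-curve point multiplication (secp256k1 for
# bitcoin) would derive the matching public key from this number.
```

`secrets` draws from the operating system's cryptographic random source, unlike the `random` module, which is predictable and must never be used for keys.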

A fun way to generate a 256-bit number is to use which will generate a key-pair that can be used for bitcoin.

With bitcoin your public key is your bitcoin address.

It’s critical to keep your private key secret as this will allow complete access to any bitcoin you own or in another blockchain any encrypted data etc.

Digital signatures:

  • More secure than physical signatures
  • Change with every message – altering messages even slightly completely changes the signature
  • It’s infeasible to find a valid signature if you don’t know the secret key. Therefore they cannot be forged.

Building a decentralised ledger

Consider a ledger to track transactions between 4 parties:

To manage this without a central authority there are a number of requirements:

  • We need to allow everyone to add transactions
  • We need to make sure periodic settlement is done
  • We need to decide where the data is stored and how everyone agrees on ‘the truth’

Developing a blockchain protocol

The original bitcoin paper looked at each challenge and proposed a solution

Challenge 1 – transaction trust

What stops party B adding a transaction saying that party A owes them $20?

Traditionally we trade with cash on delivery or use payment systems with some inbuilt protection e.g. credit cards or paypal.

This is where digital signatures can help:

  • A digital signature can be used to prove that the transaction is verified
  • Party A adds a digital signature to transactions in the ledger which validate they have seen them and agree with them
  • These signatures must be infeasible to forge. This is where cryptography makes blockchain possible.

We can now update the first challenge in our blockchain protocol

Challenge 2 – ensuring settlement

What if party A racks up debt and refuses to settle?

Instead of solving the problem of making the parties settle, the blockchain paper raises the question: what if we could remove the need to settle?

This can be done by preventing people from spending more than they take in.

1) Start by giving all participants an opening balance

2) Only allow transactions where no overspending occurs

We can update our blockchain protocol for the second challenge

An interesting note is that bitcoin does not keep a running balance; each new transaction is checked against the complete transaction history.
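The settlement-free rule can be sketched as a validity check that replays the full history (the parties and amounts here are invented for illustration):

```python
def balance(ledger, party):
    """Replay the full transaction history to compute a party's net position."""
    total = 0
    for sender, receiver, amount in ledger:
        if sender == party:
            total -= amount
        if receiver == party:
            total += amount
    return total

def is_valid(ledger, opening, tx):
    """Reject any transaction that would overspend the sender's funds."""
    sender, _, amount = tx
    return opening + balance(ledger, sender) >= amount

ledger = [("A", "B", 30), ("B", "C", 10)]
opening = 100  # every participant starts with the same opening balance

print(is_valid(ledger, opening, ("A", "D", 80)))  # A has 100 - 30 = 70 → False
print(is_valid(ledger, opening, ("A", "D", 50)))  # 70 >= 50 → True
```

Because no one can ever spend more than they hold, there is nothing left to settle.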

Challenge 3 – storage and management

Without a central authority how do we manage the ledger:

  • Who hosts the storage
  • Who controls the rules of adding new transactions
  • etc.

Let everyone keep their own copy of the ledger, whenever someone has a new transaction they broadcast it out to the network.

How can you be sure that each ledger picks up every transaction that is broadcast out and in the right order?

We can update the blockchain protocol:

The final challenge we will cover deals with how the network stays aligned. For this we need to deep dive into the structure of blocks and look at how the chain is managed.

Blocks and the blockchain

Blockchain and proof of work

How do we ensure the network stays aligned?

Transactions are bundled into blocks. And those blocks are validated in a way that allows the network to reach consensus.

The method bitcoin uses to validate blocks is known as ‘proof of work’. This is made possible because of the foundational concepts discussed thus far:

  • Cryptography
  • Hash functions
  • (and computational work)

How blocks are created

In the diagram I’ve included a screenshot of the bitcoin wallet Mycelium as an example of how a user might interact with a blockchain. Using Mycelium a user may create a transaction: a request to send bitcoin from their address to another address. This transaction then enters the ‘mempool’, which can be considered a waiting room for transactions waiting to be added to the blockchain. Miners pick transactions from the mempool and create ‘blocks’ of transactions, which they then compete with other miners to validate and add to the chain.

Anatomy of a block

Using a slightly simplified block design for the purposes of illustration we can consider a block to be something like the below.

  1. Sequential ID: to ensure the same transaction / block cannot be copied
  2. Nonce (number used only once): a number the miner can vary to validate the block
  3. Data / message: the transactions in the block – normally hundreds or thousands, only a few shown for illustration purposes.
  4. Previous block hash
  5. Hash signature of the current block
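The simplified block layout above might be modelled as follows (the field names are my own and real bitcoin blocks differ in detail):

```python
import hashlib
import json

def block_hash(block):
    """Hash the block's contents (excluding its own hash field)."""
    payload = json.dumps(
        {k: block[k] for k in ("id", "nonce", "data", "prev_hash")},
        sort_keys=True,  # deterministic ordering so the hash is repeatable
    ).encode()
    return hashlib.sha256(payload).hexdigest()

block = {
    "id": 1,                         # sequential ID
    "nonce": 0,                      # varied by the miner during proof of work
    "data": ["A->B 10", "C->D 5"],   # transactions (normally thousands)
    "prev_hash": "0" * 64,           # hash of the previous block
}
block["hash"] = block_hash(block)
print(block["hash"])
```

Note that the hash covers every other field, so changing the ID, nonce, transactions or previous-block hash changes the block's own hash.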

Many miners build blocks in parallel

Miners are constantly picking up transactions from the mempool and competing to make a block. This is all part of a system that ensures blocks are created at a certain frequency and that the network agrees on the order of the blocks, and hence of the transactions contained within them.

By “competing to make a block” we refer to a competition that miners take part in. Understanding this competition is key to understanding how blockchain works and where its weaknesses lie. This competition is called proof of work.

Proof of work

Proof of work involves using a cryptographic hash function to encode the data contained in the block into a hash.

The format of the hash is unpredictable.

So the blockchain sets a competition for miners to find a hash with a certain number of leading zeros. This is known as a difficulty threshold.

The only way to do this is by changing the nonce. The timestamp will also change as time progresses. Each miner keeps generating hashes with this information until eventually one miner finds a hash that wins.

Recall that the data we put into a hash function changes the hash output. Therefore changing the nonce or timestamp provides an opportunity to hash the same block of transactions over and over and generate different hash outputs.
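A toy version of this search, with a much lower difficulty than bitcoin's so it finishes quickly (the transaction string is invented):

```python
import hashlib

def mine(data, difficulty=4):
    """Vary the nonce until the block hash has `difficulty` leading hex zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1  # same data, new nonce, completely different hash

nonce, digest = mine("A->B 10; C->D 5")
print(nonce, digest)  # the digest begins with "0000"
```

With 4 leading hex zeros this takes tens of thousands of hashes on average; bitcoin's 19 leading zeros is astronomically harder, which is the whole point.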

If we take a look at recent bitcoin blocks, which we can do via a number of online block explorers:

We can see that the winning block hashes start with a certain number of leading zeros; currently 19 for bitcoin.

Proof of work – step by step

  • Miners hash the block data using a hashing function e.g. SHA256
  • It is likely the resulting hash will not have the required number of leading zeros
  • The miners adjust the Nonce
  • The timestamp may also change
  • They will continue to run this until a hash is generated with the desired number of leading zeros.
  • The hash output of SHA256 in hexadecimal has 16^64 possible combinations
  • (each digit has 16 possible values, 0-F, and there are 64 digits)
  • To be valid a hash has to meet a certain threshold of difficulty set by the network
  • As of 25th May 2020 the latest winning block hash for bitcoin was:
    • 0000000000000000000bea250e982735d2d6a92bf9b21ec222e1394c4c0746f4
  • This block has 19 leading zeros – the current difficulty threshold
  • This block contained 2598 transactions
  • The probability of SHA256 returning a hash with 19 leading zeros is extremely low
  • Whoever finds a valid block is rewarded with bitcoin; the winning block noted above earned 6.25 bitcoin
  • The goal of the difficulty threshold is to control how quickly blocks are added to the chain; and hence how quickly transactions are processed
  • Miners are working on different blocks at the same time, if a block is not validated the transactions will remain in the mempool to be processed
  • Miners also get transaction fees which incentivise miners to pick certain transactions from the mempool
  • Blocks in bitcoin are limited to around 2500 transactions
  • For comparison, VISA processes around 1,700 transactions per second and is capable of handling much higher volumes, so bitcoin is relatively slow compared to other payment systems.

Block demo

As we talk through this section, please experiment with the excellent online demo available on

  • You can experiment with creating blocks using the ‘block’ menu item
  • You can view how the chain builds up and how changing blocks affects the chain

How blocks are connected

After a block is validated via proof of work, the hash of that block is then used as part of the construction of the next block.

This means that any change to any previously validated block will make all blocks since then invalid.
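A toy chain makes this knock-on effect visible (the hashing scheme here is a simplification of real block headers):

```python
import hashlib

def h(prev_hash, data):
    """Hash a block's data together with the previous block's hash."""
    return hashlib.sha256(f"{prev_hash}{data}".encode()).hexdigest()

# Build a three-block chain, each block embedding the previous block's hash.
chain = []
prev = "0" * 64  # genesis
for data in ["block 1 txs", "block 2 txs", "block 3 txs"]:
    block = {"data": data, "prev_hash": prev, "hash": h(prev, data)}
    chain.append(block)
    prev = block["hash"]

def validate(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block["hash"] != h(prev, block["data"]):
            return False
        prev = block["hash"]
    return True

print(validate(chain))         # → True
chain[0]["data"] = "tampered"  # change an early block...
print(validate(chain))         # → False: every later block is now invalid
```

To hide the tampering an attacker would have to re-mine every subsequent block, which is exactly what proof of work makes prohibitively expensive.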

The synchronisation challenge

  • The blockchain is decentralized
  • The complete chain exists on every participating node
  • Nodes are spread all over the world
  • In the short term nodes will not be aligned due to:
    • Computer processing speeds
    • Network speeds
    • Isolated downtime
    • Failures
    • Etc.
  • The protocol deals with this by always trusting the longest chain
    • With bitcoin, around 6 blocks back is considered to be trusted
    • With 1 block per 10 mins this means blocks created around an hour ago are trusted
  • Blocks which are not accepted as part of this longest chain have their transactions returned to the mempool to be processed as part of future blocks

Consider 3 nodes participating in a blockchain network at a given moment. Their recent blocks may differ, but as node c's chain is the longest it will be trusted, and nodes a and b will eventually synchronise with node c.
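The longest-chain rule itself is trivial to express. A toy sketch of the three-node snapshot, with block ids standing in for full blocks:

```python
# Hypothetical snapshot of three nodes' chains (block ids only).
node_a = ["g", "b1", "b2"]
node_b = ["g", "b1", "b2", "b3"]
node_c = ["g", "b1", "b2", "b3", "b4"]

def resolve(chains):
    """Longest-chain rule: adopt the longest chain seen on the network."""
    return max(chains, key=len)

trusted = resolve([node_a, node_b, node_c])
print(trusted)  # node_c's chain wins; a and b will synchronise to it
```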

51% attacks

One of the most commonly discussed weaknesses of public blockchains is the 51% attack. It is based on the computational feasibility of a node, or set of nodes, validating new blocks on the network faster than everyone else for an extended period of time; at that point there is no longer true consensus across the distributed network and a single party has taken control.

Recall that each transaction that enters the mempool is verified by digital signatures.

This means an attacker cannot add false transactions.

However, there is a way to fraudulently attack the network, called the ‘double spend’ attack.

To execute the double spend attack, the attacker must be able to validate blocks faster than the rest of the network hence > 51% of computational power is needed. The first step is to take the latest block offline.

1) The attacker takes the latest block offline

2) The attacker mines faster than the network and 3) spends their cryptocurrency on the main network.

4) The attacker goes online, as they have the longer chain the rest of the network will trust it and 5) they can then double spend.

6) Eventually the attacker will cease the attack, but will have by then double spent their funds.

Public, private and consortium blockchains


Public (permissionless)

Bitcoin is a public blockchain, and this is where the technology shines. The network is open for anyone to use, there is no central authority and proof of work is used to validate blocks.

  • Zero trust
  • Anyone can take part
  • Must adhere to protocol
  • Maintains protocol specification and decentralization via consensus algorithms such as proof of work
  • Allows entities that don’t trust one another to collaborate
  • Issues exist with privacy and scalability
  • Large number of participants protects the network; the higher the number the better.

Private (permissioned)

As people have tried to take advantage of blockchain, they have developed so-called private blockchains. The idea is that many organisations see some benefit in blockchain, but they don’t want it to be truly public. They want control over who can enter transactions and who can validate transactions / blocks.

  • High trust
  • Block creation power granted to a set number of participants
  • Could have a single entity responsible for syncing the entire network (proof of authority)
  • A business can have full control
  • Removes need to incentivize individuals to create blocks
  • Alleviates transparency concerns
  • Better scalability and transaction throughput
  • Attacks would come from nodes known to the network & users can be blacklisted
  • Just a database but with cryptography, immutability and transparency benefits.

Consortium

  • Low trust
  • Consensus mechanism controlled by a limited number of nodes
  • Right of access can be limited to the predetermined nodes or made public
  • Partially decentralized
  • Could be set up in such a way that all nodes have to sign transactions
  • Power isn’t centralized to one party.

Alternatives to proof of work

Proof of work presents some issues. On the one hand, it’s extremely processing-intensive and has received criticism from an environmental perspective. On the other hand, not every organisation is comfortable with an open model where anyone has authority to verify blocks based on processing power. Alternatives are always under investigation; some examples include:

Proof of work

  • Computationally intensive
  • 51% attacks exist as a risk
  • Incentivisation costs are needed for transactions
  • Reliance on a high number of nodes / high participation

Proof of stake

  • Ability to create new blocks is based on the proportion of currency held
  • Solves computational issue
  • Can lead to hoarding

Proof of authority

  • One or more nodes are certified as authorised to create new blocks
  • Essentially reduces blockchain to something closer to a traditional database from a centralization perspective.

Other topics

Cryptocurrency vs. token

Cryptocurrencies have been around since bitcoin. More recently ‘tokens’ have become a popular topic.

  • A mechanism to represent a physical item / value in a digital way
  • Could be considered as a share in a company rather than a currency unit
  • Ethereum works by using the token called ‘Ether’
  • Can be created arbitrarily on a network (e.g. create a project and sell to fundraise)
  • Governed in a similar way to coins
  • ICO – initial coin offering; the crypto version of an IPO. These are not well regulated.

Ethereum smart contracts

  • Ethereum supports smart contracts
  • Ethereum has the ability to store and execute code on the blockchain
  • This code, written in Solidity (a language similar to JavaScript) or lower-level assembly, allows you to create an “IF .. AND .. THEN” style executable version of a traditional contract
  • For example: IF a deposit is paid AND property checks are confirmed AND funds + deeds are made available THEN transfer money and ownership deeds between parties.
  • The code is executed by the Ethereum virtual machine (EVM)
  • The EVM runs on every node, so it works as a distributed computer
  • Processing isn’t free, it’s pay per computation (with Ether)
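To make the property example above concrete, here is the IF .. AND .. THEN logic sketched in plain Python. Real smart contracts are written in Solidity and executed by the EVM; the function name and conditions here are purely illustrative.

```python
# Plain-Python sketch of the "IF .. AND .. THEN" property-sale example.
# Illustrative only: a real contract would run on-chain and move real funds/deeds.
def settle_property_sale(deposit_paid, checks_confirmed,
                         funds_available, deeds_available):
    if deposit_paid and checks_confirmed and funds_available and deeds_available:
        return "transfer money and ownership deeds between parties"
    return "conditions not met; no transfer"

print(settle_property_sale(True, True, True, True))
print(settle_property_sale(True, False, True, True))
```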

Benefits

  • Blockchain is immutable so once the contract is created, it cannot be changed
  • No central party is required to hold / execute the contract
  • Contracts can engage with other contracts so complex chains of business events can be handled
  • As EVM is distributed it’s highly reliable

Drawbacks

  • If there is an error in a contract it’s very difficult to fix
  • Processing time is slow
  • Due to the distributed nature of EVM every computer on a network has to process
  • Storage of data / information and processing fees are expensive
  • Not widely proven with scaled live examples
  • While blockchain itself is immutable and transparent, the triggers or rules for the contract can easily be manipulated, potentially defeating the benefit of using blockchain.

Decentralized apps (dApps)

  • Higher level apps that use smart contracts
  • i.e. connecting together more advanced logic across one or more contract
  • Web or mobile app front end, a digital connector for blockchain, then decentralized networks (EVM)

Benefits

  • Robust to attack
  • High fault tolerance
  • Access to value, tokens, currency
  • Easy payment processing
  • User verification simplified
  • Self-sovereign data management
  • User data more protected
  • Higher trust with public networks


Drawbacks

  • Running code is expensive
  • Storage on the blockchain is expensive
  • Every node runs every process
  • Must wait for transactions
  • Smart contracts are permanent
  • Scaling is a concern (speed of the slowest computer)

Final thoughts

There are a number of questions to ask when considering blockchain as a solution. From my perspective, blockchain is a strong candidate when we are considering a truly public application with solid requirements such as:

  • We don’t want any party to have sole authority
  • We want an immutable (unchangeable) record.

For other cases, where we are considering a non-public network and might restrict who can add transactions and how blocks are verified, it might be better to consider a more traditional database. Keep in mind blockchain comes with a number of disadvantages:

  • It’s slow (for various architectural reasons)
  • It’s difficult / impossible to resolve any errors with historic transactions
  • It’s not computationally efficient.

Questions to consider:

  • Do I need a database – if so, what architecture is optimum?
  • Are there multiple writers?
  • What level of trust exists between writers?
  • Can I / do I want to rely on a trusted party?
  • Do I want to limit access or validation control?
  • What’s more important: immutability or efficiency?
  • Do I need to make it public?


A summarised version of this article is available in Google Slides; please do cite me if using it.

I found the following resources extremely helpful when first learning about blockchain. You will find the inspiration for some of my diagrams here:

(featured image icons made by Becris from

Articles Technology

A No-Nonsense Guide to Digital and Technology Strategy in 2020

‘Digital’ – A Magnet for Nonsense!

‘Digital’ was and still is a popular term in the business world. In recent years a lot of papers, presentations and communities of interest have appeared on this topic. Unfortunately, the majority of them seem to create a lot of content, but little meaning. Put simply, a layman could read a typical brochure on ‘AI’ or ‘Blockchain’ and still have no clue what the physical product or service is and how it differs from more traditional technology.

Back in 2000, I first started my career in information technology, not long after joining we were re-branded as ‘information & decision solutions’. I think the majority of people in information technology will be familiar with the constant re-branding of teams and functions.

This is the case with ‘digital’.

Digital technology is a hodgepodge of technologies which are new / popular / highly saleable. I tried to figure out what the common theme is among technologies that make it onto the ‘digital’ podium, but there doesn’t appear to be one.

In this post, I’d like to have a plain-English, no-nonsense discussion of digital (or, more simply, new & popular) technologies.

I’ll start by taking a look at what major companies are saying about digital, then I’ll look at structuring it into a simple taxonomy to promote a clearer understanding. I’ll then take a look at key areas of digital one by one. Finally I’d like to talk about the approach taken to develop a digital strategy.

Before getting into the detail I would like to give some advice to anyone thinking about investing in digital products or services:

  • Take a ‘doubtful’ stance, don’t get caught up in the hype;
  • If someone can’t explain a technology (how it works and why it’s useful) in a few sentences to a layman, do not listen to them;
  • Be careful of conflicts of interest when dealing with suppliers and partners; I’ll talk a little more about this next.

The ‘Digital’ Cash Cow

There is no doubt that ‘digital’ was and is a cash cow for consultancies, systems integrators and other tech orientated firms.

It’s a perfect sales opportunity for these businesses:

  • It covers a broad range of products and services;
  • There is a level of mystique involved;
  • It’s fast moving, and hard to stay abreast of;
  • In some domains a high degree of technical competency is required;
  • There are stories of massive wealth / success (e.g. Bitcoin).

This creates a situation where companies want to invest in it, but they are not always well placed with knowledge and skills to plan or execute.

There is a positive role for consultancies, systems integrators, research companies and even independent contractors in helping define and implement digital strategy, but extra care needs to be taken:

  • Make sure you are not being sold nonsense!
    • Look for clear and easy to understand descriptions;
    • Look for live examples that are delivering benefits;
  • Don’t use expensive partners for very simple technologies – Robotic Process Automation is a good example of this;
  • Don’t invest in overly complex solutions – I’ve seen a variety of ‘proof of concepts’ developed with Blockchain where a more traditional database would be much more suitable;
  • Be careful to ensure that people involved in digital strategy have the right experience and expertise. As digital and tech has become more popular it has attracted a lot of professionals who lack real experience and understanding of technology.

Let’s Look at Specific Technologies

The scope of digital is loosely defined. If I were to brainstorm a list of terms off the top of my head, it might look something like this:

  • Internet of things / smart things etc.
  • Blockchain / distributed ledger
  • Artificial Intelligence
  • Natural language processing
  • Voice recognition
  • Facial recognition
  • Virtual reality / Augmented reality
  • Mobile devices – 5G etc.
  • Geolocation / maps / google earth etc.
  • Robotics
  • Next generation ERP
  • Cloud

But, as a more structured starting point let’s start by looking at what some experts say as of May 2020.

What the Experts Say

I’ve decided to look at two firms: Accenture, which can represent both a management consulting and systems integrator perspective, and Gartner, which can represent a research perspective.

Accenture

Accenture UK’s technology home page leads with a 2020 trends report entitled, “We, The Post-Digital People – Can your enterprise survive the tech-clash?”.

Despite the vagueness of digital, I like the use of “post-digital” in their title, it suggests a broader way of thinking than simply referring to a bundle of new technologies as digital.

Accenture start by referring to tech-lash (pushback against tech), before highlighting data to infer people are generally still positive about tech. They then coin the phrase tech-clash to describe the situation we find ourselves in, where tech is theoretically good but often isn’t designed or implemented well. I like this viewpoint and I think it summarizes one of the major challenges we face designing and implementing technology.

They go on to talk about a challenge in how companies plan and deploy technology according to business / customer requirements etc. It reads to me that their viewpoint is that the old way of managing tech is no longer appropriate.

I am not sure about this. I think we first have to check whether a business has a formalized and effective way of managing its tech portfolio (many don’t); after we assess that, we can think about whether it works for new technologies. To my mind, methods like ITIL and COBIT should, theoretically, work with new and old technologies alike. In fact, when you think about it, technology by its nature has always been new & disruptive. Can you imagine the excitement on the project to set up the first mainframes!

I would definitely accept that many technology departments have become bogged down with too many processes / levels / standards / products etc., but this should be fixed regardless of ‘digital’.

Following this brief intro Accenture call out 5 key trends.

If I read the text and try to pull out the technology products or services I get the following:

  1. The I in Experience
    • User experience
    • Data ownership / privacy
    • 5G
    • Augmented reality
  2. AI and me
    • Automation of simple tasks
    • Collaboration between human employees and machines
  3. The Dilemma of smart things
    • I’m not sure what this refers to but it sounds like systems for subscription style products e.g. Peloton or Zipcar
  4. Robots in the wild
    • ‘Physical’ robots outside of factory / industrial use
  5. Innovation and DNA
    • Distributed ledger / blockchain
    • Artificial intelligence
    • Extended reality
    • Quantum computing

Let me critique them one by one:

1. This is pretty clear. It’s a focus on major points of importance or interest for the end-user. However I wouldn’t call this a new trend. This has always been a key area of focus for technology. The topics covered are also quite wide and don’t centre around any specific technology, plus it omits some key end-user topics.

2. This is not clear to me. I assume AI refers to artificial intelligence. Looking at the specific examples cited, automation of simple tasks is not something that would require AI (there are problems with this term that I’ll come to later). Collaboration between human employees and machines could mean almost anything related to technology, but I accept that if it gets more specific there is some really interesting stuff coming in this space.

3. I’m not clear at all what this means. If I was to guess ‘smart things’ would refer to smart devices i.e. internet of things, but the description points more towards subscription style services. Smart devices and subscription combined do open up a lot of interesting scenarios.

4. Robots in the wild is fairly clear.

5. This one seems clear, but appears to be a catch-all for other areas of interest that don’t fit the four themes above.

Gartner

Navigating to the Gartner information technology home page, a number of featured articles / insights are shown.

Looking through this page of trending topics the following technologies are mentioned:

  • Internet of Things
  • Cybersecurity
  • Autonomous things
  • Blockchain
  • Digital twins
  • Smart spaces
  • Artificial intelligence
  • Cloud

This is the kind of basic list that I might expect to see, and it is very typical of the issue of using generic terms without explaining what we are really talking about. The only one that stood out as less common was digital twin, “a replica of a living or non-living physical entity”. This immediately reminded me of the interesting article on how the model of Notre Dame in the computer game Assassin’s Creed could be useful in rebuilding Notre Dame following the fire damage in 2019. I’m also reminded of Elon Musk talking to Joe Rogan last week on the more sci-fi aspect of digital copies of living beings.

It’s a little more challenging to critique Gartner as most of the detail is hidden behind report downloads.

For the purposes of the critique on Accenture and Gartner I am purposefully only looking at their high-level descriptions. They should be able to clearly explain the ‘how’ and ‘why’ of their viewpoint on digital strategy to a layman on their landing page. It is arguable that if I dig into the detail I will get a much clearer view, but past experience of doing so has been hit or miss.

This particular critique aside, I’d note that Accenture and Gartner both have some excellent content and services.

Creating a Map for New Technologies

To build a better understanding of how this all fits together the first thing I suggest is to build a simple map of digital technologies that you may be interested in. I think it’s better to cast a wide net in the beginning and then eliminate those that may not be relevant to your business.

The map can take the form of a simple categorised list. There are different ways to approach this: one might be to categorise the technology by the way it impacts the user, another might be to categorise the technology by how it works or what it does.

The digital maps presented by companies are often confusing as they categorise things in various ways in one list. One minute they are looking at the end user impact, the next how the technology works.

I prefer to first categorise the technology according to how it works and then look at customer impact as part of a value assessment. One major advantage of this is that it fits well with traditional technology methods and aligns well with how systems architecture is managed.

For this discussion, I’ve defined 8 category buckets:

As with any taxonomy you can spend a long time debating the right categories. In my experience it’s best to draft a hypothesis quickly, debate with some colleagues and don’t be afraid to adjust as you go.

In this example I split user experience into three sub-categories; I wanted to categorise virtual and augmented reality as primarily ‘visual’ ‘user experience’ technologies.

After categorisation, we can start to slot in specific technologies that we want to consider for our business. Let’s fill in the matrix with my list, Accenture’s list and Gartner’s list.

This is as far as I’ll take this taxonomy for this discussion; however, for a real business I might turn it into a matrix in various ways, allowing me to map e.g. benefits or business units to the technologies mentioned. I might then colour code by complexity or value etc. This should be a useful format to ensure a team / function have a similar understanding of what is being discussed.

Let’s look next at each of these categories in more detail.

User experience – touch / type

The way we interact with desktops, laptops, tablets, phones and smart watches etc. is continually evolving. At one extreme is smart use of touch on mobile; at the other, more traditional technologies are investing heavily in their user interfaces (e.g. the major ERP company SAP focussing on their customisable Fiori interface).

User experience – virtual

Augmented reality – An example I discussed with colleagues last year is a product based on glasses which can project context-relevant information. Imagine you are onboarding a new shift worker in a factory. The worker wears the glasses; when looking at various parts of the manufacturing equipment, the glasses overlay operating instructions or status info. This can accelerate onboarding, reduce errors, reduce downtime etc.

Virtual reality – An easy example is training for certain dangerous or difficult jobs, e.g. pilots. As virtual environments get better and VR wear becomes cheaper and more accessible, I expect an explosion in this space.

Smart spaces – A smart space is simply a space which includes multiple smart devices that connect together to give a space-relevant experience or benefit. Examples include airports with facial recognition for passport control and barcode scanning for baggage handling, or hospitals with trackers for patients and medical equipment / drugs etc.

Non-traditional databases

Databases are a broad and complex topic. Luckily, most business people don’t need a deep understanding of database technology. However, as database terms are being used in marketing and sales materials, it’s worth investing a little time to understand the basics.

The last decade has seen something of a revolution in database design. Traditionally, databases were designed to record and store primarily numerical records. Think of a list of shipments or a list of accounting entries. As IT hardware became cheaper and the internet arrived on the scene, data volumes exploded and shifted from primarily numerical to a wide variety of formats: images, text documents, audio, video etc.

Databases rely on database management systems that control how information is written and read. Advancements in management systems as well as hardware have created a lot of new database products that have massively changed what is possible.

Big data refers to the ability to handle massive amounts of data across distributed hardware. This is a technical solution that allows companies to handle these massive data volumes in an efficient way. There are excellent articles out there which outline examples such as the way Amazon sets up its data centres. In a nutshell, by using multiple devices, cheaper commodity technology can be used at scale rather than cutting-edge expensive devices.

In-memory computing refers to exploiting the increasingly cheap price of random access memory. This means more information can be stored and read without writing to disk. In general, a huge part of the response time of computers relates to the time taken to read and write data. In-memory computing has allowed traditional systems such as ERP to become much faster. The major ERP company SAP has led with this, using in-memory technology to develop a new database management system they call HANA. Until recently, systems landscapes have been designed with one ‘operational’ database for recording information and one for analysing information, because it’s difficult to optimise a traditional database to both read and write effectively. HANA is disruptive in that it can work effectively as both an operational database and an analytics database.

NoSQL refers to a wide range of new database management systems that can handle non-traditional data requirements. A popular example is MongoDB; a document-orientated DB.
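To give a feel for what ‘document-orientated’ means, here is a hypothetical order stored as one nested document, with plain Python dicts standing in for the JSON-like documents MongoDB stores; all field names are illustrative.

```python
# One order as a single nested document, rather than rows spread across
# several relational tables (orders, customers, order_lines) joined by keys.
order_document = {
    "order_id": 1001,
    "customer": {"name": "A. Smith", "country": "UK"},
    "lines": [
        {"sku": "X1", "qty": 2},
        {"sku": "Y9", "qty": 1},
    ],
}

# The whole order travels as one unit; no joins are needed to read it.
print(order_document["lines"][0]["sku"])  # X1
```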

Distributed ledger / Blockchain – I choose to categorize blockchain as a database, as it’s a technology that essentially records information. The benefit of a public blockchain network is that the information is ‘immutable’, i.e. it cannot be changed. It can also be distributed amongst participants with no central ownership. These are great benefits and make blockchain very interesting. However, they only apply to a truly public network. Many corporate applications of blockchain are not public; they replace proof of work with proof of authority. This removes the benefit, and in my opinion a traditional database would be a simpler, cheaper and more appropriate solution.

Information Processing

I think this is the area that lacks clarity the most and is the area where we see terms such as AI or algorithms being used to make products and services seem more advanced than they are. Let’s take a look at some of the key terms:

AI / Artificial intelligence: This term should set alarm bells ringing in your head. I think it has become meaningless through application to almost any technology product. Some people will label any system with logic that replicates human behaviour, e.g. IF the kettle is boiling THEN pour the water in the cup, as AI. Other people will only consider something as AI if it can beat a human at chess and has the potential to wipe out humanity! You can’t take this as a meaningful term when considering technology.

Machine Learning: This is getting closer to a specific technology. Machine learning describes the ability of a computer system to ‘train’ itself. Machine learning is very popular in the field of image recognition. An often cited example is giving a system 1,000,000 images of cats from the internet; the system will learn to recognise when a photo on the internet has a cat in it. Machine learning is a general term that describes this, but is still not specific about how the technology actually works.

Neural Networks: This is one type of machine learning. It’s based on an attempt to mimic the way the human brain works. It’s constructed of ‘nodes’ that mimic neurons and each carry out one simple operation. Layers of nodes can then carry out more complex operations. Neural networks are quite interesting and worth a read.
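A single node can be sketched in a couple of lines: a weighted sum of inputs passed through an activation function. The weights and inputs below are arbitrary illustrative values; in a real network they are learned from training data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes the result into (0, 1)

print(neuron([0.5, 0.8], [0.4, -0.6], 0.1))
```

Layering many such nodes, and adjusting the weights against training examples, is what lets the network carry out the more complex operations described above.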

One important thing on machine learning and neural networks is that they have to be trained on existing samples and often a very high volume. If there is any bias in those samples the neural network will build in that bias. I believe there are already examples related to insurance quotes for minorities etc. I expect to see a growing need to audit these and potential litigation here in the future.

Algorithms: An algorithm is simply a recipe of computational steps, often just a mathematical formula. If I have a small program that converts degrees Fahrenheit to degrees Celsius, I could brand it as an AI algorithm-driven solution.
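To underline the point, here is that entire ‘AI algorithm-driven solution’:

```python
# The whole "algorithm": one line of arithmetic.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

print(fahrenheit_to_celsius(212))  # 100.0
```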

Analytics: Another term that is quite often misused and can represent anything from very simple to very complex. Essentially when talking about analytics we should be referring to applied statistics and mathematics. Sometimes analytics is broken into the following:

  • Descriptive: Explain what happened and why
  • Predictive: Forecast what will happen in the future
  • Prescriptive: Recommend what actions to take based on the forecasts.

The bottom line in information processing is to make sure to understand what specifically is being talked about.

  • If buying or building an analytics solution I want to know what specific statistical and mathematical methods and models are included.
  • If I am buying or building a machine learning solution I need to understand the details, e.g. is it a neural network, how much training is required, what is the accuracy, how is bias handled etc.

Cybersecurity

Cybersecurity is a complex topic that deserves its own detailed discussion. Advancements in computing power, analytics and the volume of data stored in cloud environments make it easier than ever for malicious actors to attack private networks. With this in mind, any organization needs a solid cybersecurity plan and also needs to carefully consider the security impact of any new digital technologies brought into its network / architecture.

The best way to get a feel for the importance of cybersecurity is to listen to some episodes of Darknet Diaries.

Data management

Traditionally, systems are not sophisticated in how they manage data. A good example is GDPR, which tightly controls what personal information can be held and for how long. Any system that handles personal data has to have the capability to manage this. Further to that, specialized ‘data management’ systems exist that can help to manage this across an organization’s technology landscape.

Internet

Internet infrastructure and standards are themselves an important enabler for new products and services covered in other areas. This can be particularly important when considering customers from different geographies and income groups, where the method and quality of internet access will vary.

Internet is a key consideration for a wide range of technology initiatives such as Cloud / homeworking / offshoring etc.

It’s also particularly important when designing mobile applications. Does bandwidth support video calls, does the internet infrastructure support geo-location etc.

Form factors

Different form factors create opportunities for how we use various components of technology with end users.

Mobile in particular has had, and continues to have, a hugely disruptive effect on traditional industries. Think of staffing, delivery and taxis. Mobile devices have allowed app-based businesses to form and succeed which utilise the following capabilities in conjunction with a mobile device:

  • A customer user interface with booking / delivery requests etc.
  • A partner user interface to sort / display active requests and allow acceptance
  • In built e-contracts / legal documents where necessary e.g. staffing
  • Geo-location / map integration showing partners where to go e.g. in the case of delivery to the customers location or staffing to the work location.
  • Pay integration – ability to pay via card / PayPal etc.

Automation

In the technology map I’ve considered two forms of automation.

Physical robots are a space of their own and I won’t consider them in detail here.

Process automation or ‘robotic process automation’ is a fairly traditional space. There has been a recent boom in this with firms such as UiPath becoming quite successful. This is often branded under ‘digital’ as exciting and disruptive, however the technology at play is very simple.

In a nutshell robotic process automation allows you to take a set of steps a user does with one or more systems and automate it.

For example, an accountant looks up a record, checks the client against another system, then checks a rate against a third system, and then approves or declines. If fixed rules can be written for all cases, this can be automated using RPA.
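A sketch of such a fixed-rule check in Python. The field names and the rule itself are hypothetical; a real RPA tool would read these values from the screens of the underlying systems rather than from a dict.

```python
# Fixed-rule approval logic of the kind RPA can automate.
# Field names and thresholds are illustrative only.
def review_record(record):
    client_ok = record["client_status"] == "active"
    rate_ok = record["rate"] <= record["approved_rate"]
    return "approve" if client_ok and rate_ok else "decline"

print(review_record({"client_status": "active", "rate": 95, "approved_rate": 100}))   # approve
print(review_record({"client_status": "blocked", "rate": 95, "approved_rate": 100}))  # decline
```

The point is that no learning or intelligence is involved: if every case can be decided by rules like these, the process can be scripted.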

I recall around 15 years ago we used automation tools to mass test transactions in ERP systems which more or less did the same.

I have a couple of recommendations on RPA:

  • RPA itself is very simple and does not require consulting or systems integrator assistance. Companies can learn to develop RPA scripts themselves; simple training is all that’s required.
  • However, I do recommend RPA work should only be considered as part of a broader process improvement initiative; there are better options than RPA in many cases, e.g. eliminating the process entirely or changing the underlying systems that require a high volume of manual effort.

RPA could be considered a ‘band-aid’ that sits on top of poorly designed systems. It can provide a large benefit in terms of freeing up human time, but all RPA scripts will need to be managed on an ongoing basis.

Next generation ERP

Traditional enterprise resource planning software providers such as SAP, Oracle and Microsoft are also developing their own disruptive changes. We already touched on the HANA database, which has allowed SAP to vastly improve its Business Suite product, now called S/4 HANA.

There are too many new and changing products in the ERP space to cover here. However, a noteworthy area of interest that I would like to highlight is subscription management.

Traditional customer relationship management systems are not designed to handle subscription models; however, this is becoming an increasingly popular way to engage and contract with customers.

Creating a digital strategy

To cut through the nonsense at the strategy level, I recommend treating digital strategy as no different from any other part of strategy. Innovation should be a fundamental part of strategy, and digital is simply an innovation slant on technology.

Different companies do strategy in different ways. Generally speaking there is a higher level corporate strategy that will define key targets (sales, margin etc.) as well as direction for each business unit (e.g. objectives for products and customer groups).

The strategy then will typically flow down to individual business units who will create a more detailed plan that aims to deliver the goals in the corporate strategy.

Information technology should be part of the plan for each business unit (specifically, how tech will support that BU), and should also have its own comprehensive plan.

The plan for information technology itself should deal with topics such as overall architecture, systems development and systems support etc.

When you consider this process, the key to a successful strategy is ensuring that the IT experts are correctly involved at each stage:

  • The CIO, with the support of senior architects, works with the Executive Committee on technology elements of the corporate strategy. This will often focus on things such as budgets for major projects and new technologies to support business objectives.
  • Domain specific architects and technology product experts will work with the individual business unit leads on the business plan for each business unit, ensuring that technology is embedded in each plan.
  • Finally, the information technology strategy will involve all key leaders in information technology and bring together everything they are doing.

At each stage, thought should be given to how technology can contribute to the business. Some smart questions to ask are:

  • What technologies are our competitors investing in?
  • Are there any ‘new digital businesses’ entering our market segments? (If so, find out everything about them!)
  • What direction are our existing technology partners taking with their products / services?
  • What are the experts saying about our industry, geography etc.?

This is a somewhat simplified view of strategy development. I would highly recommend companies take a proactive approach to optimizing strategy. If your strategy does not result in good plans for digital, it’s highly likely you are missing other opportunities in the market and not addressing all relevant business risks.

I’ve seen talk of creating separate digital strategies and forming separate digital teams. I don’t like this approach for a number of reasons: it tends to end in silo thinking and silo product development. It might have to be done on a tactical basis, but I would not recommend it.

If you silo digital thinking too much the following issues may occur:

  • Your digital investments may not align well to business objectives as it’s somewhat removed from the general strategy and planning process;
  • Setting up a new ‘digital’ team is likely to result in a group of people who are biased towards digital and more likely to invest in products that are not yet ready or that lack value;
  • Even if your digital investments are successful they won’t bring the rest of the business along with them.

If the existing organization lacks the capability and capacity to embed digital in the existing strategy process and business management systems, then I simply suggest adding new employees or consultants to the existing teams to beef up capacity and capability.

Those people can also form a virtual CoE on digital to bring thinking together and present summaries on the topic, but the key thing is that they are embedded with existing management in all units and at all levels.

Dealing with digital disruptors

If you are in an established business facing competition from ‘digital’ disruptors, e.g. new app-based businesses, I would recommend splitting the digital aspects of your tech strategy in two:

  • Innovation of existing products and services. This will often involve things like automation and improved analytics.
  • Development of new products and services based on the ‘art of the possible’ with new technologies.

The reason I would split this out is that it may be impossible to leverage new technologies on existing processes. Existing IT architecture may also make some existing processes and systems impossible, or very expensive and difficult, to change.

This may sound like I am contradicting what I said earlier. This work should still be developed and done within your existing strategy process and management structure, but the products defined should be split along this axis.

This is really an accelerator for businesses facing current or future market share loss due to disruptors.

What’s your view of ‘digital’ technology?

What technologies did I miss that you think are interesting?

What would you be interested to read more of my thinking on connected to this topic?

Articles Finance Transformation Process improvement

Optimization of the finance record to report process

“A complex, lengthy process, often not well understood”

Starting from business transactions such as the purchase of materials, payment of employees and execution of financial transactions, and ending with reporting and decision making (including the submission of detailed annual reports), the record to report (RtR) process is a long, complex process that involves people from across the enterprise.

Despite the critical nature of the process it’s rare to find RtR clearly documented from end to end; few employees can describe the process in detail from start to finish. This could be in part due to the process extending from the core customer-facing business through to the technicalities of statutory and regulatory reporting. Some excellent papers and books exist, but many focus only on individual aspects of RtR.

Defining RtR

There are different definitions out there, from a narrow view, such as the primary financial statements (balance sheet, profit & loss, cash flow) only, through to a wider view that includes aspects of planning, management reporting and regulatory reporting.

It’s useful to start from the wider view; after all, the primary statements are a huge dependency for these other processes.

The majority of organisations have followed the trend of addressing processes from a horizontal or value chain perspective. However, while processes are named on a horizontal basis, the organisation and management of process, data and business applications is often still done according to traditional functional leadership models.

For the purposes of this post we will consider all business transactions and reporting with financial impact as part of the extended record to report process.

The big picture

Every step of RtR is beset by problems caused by errors in previous steps; this is one reason why it’s important to take a step back and look at the end to end process. Secondly, it’s important to retain a business-based view of what the process aims to achieve. It’s possible to find entire accounting teams working exclusively on the preparation of one IFRS disclosure; whilst highly capable, they cannot always describe where their input data came from or the originating business transactions. Stepping outside the need to comply with specific requirements on a technical basis, a core aim of financial and management reporting is to ensure that information accurately reflects the reality of the business.

Anyone working in record to report should understand the context of:

  • What is the current corporate strategy (targets, markets, risks etc.)?
  • What is the current product set (products, customer base etc.)?
  • How does this fit with their part of RtR? – how does this relate to the numbers, analysis, commentary etc. that they are working on?

Start from the customer

Useful tools from six sigma include ‘voice of the customer’ and customer satisfaction. With RtR it’s highly beneficial to start from the requirements of external and internal end customers and then work those requirements back through the process. End customers of RtR include:

  • Statutory authorities – filing of accounts & other disclosures (IFRS / local GAAP)
  • Tax authorities – filing of tax returns
  • Regulatory bodies – disclosure of required industry specific reporting, for example:
    • Insurance & banking – assets, valuation, product sales, transactions, liquidity etc.
    • Pharmaceuticals – US FDA or local equivalent requirements
  • Shareholders, market analysts etc.
    • Half year and annual reports
    • Ad hoc announcements and reports based on key business events
  • Other parties
    • Special topics such as sustainability, diversity, health and safety etc. which may include some limited financial information
  • Internal management
    • Financial information as a basis for planning, budgeting, and generation of performance indicators utilised in making decisions to steer the business.

When considering management reporting it’s useful to think of RtR in the context of management models such as “plan – do – check – act”. At a simplified level, the business transactions we record provide a continual way to measure the “do”, while the resulting information output in reports and analysis provides a basis for “check” and “plan”. Thus the planning process is closely connected to, if not part of, RtR.

What does the customer need or want?

Unfortunately a lot of efforts to improve RtR fail by jumping straight into mid-process improvement without ensuring they are focused on the right outputs. Requirements are often based on what the customer receives in the ‘as is’ process, or on guesswork.

An issue lies with the availability of senior management and top experts. CEOs and CFOs rarely have the time to sit and explain to a project team the exact structure and wording they would like to see in their annual report; likewise, general managers rarely have the time to help design every line of a monthly business review reporting pack.

There are also risks and issues in engaging with statutory and regulatory bodies or with external analysts at this level of detail; specifically trying to uncover requirements without giving away sensitive information about the operation of the business.

Regardless of the challenges it’s critical to try and connect with the end customer in order to define the answer to questions such as:

  • What is the minimum that has to be reported to maintain shareholder and market confidence?
  • On top of the minimum what extra information needs to be reported, what benefit does it give and what is the cost?
  • What questions and decisions is each report attempting to answer?
  • How does reporting capability compare to competitors – content, timing, quality?
  • How will requirements change over time for each report / information area?

These answers are not only needed for the design and implementation of reporting and analytics; they are also needed at the data model design stage for business applications. Enterprise resource planning projects (such as SAP and Oracle) can fail due to poor data model design. I’ve seen multi-million pound ERP projects occur without any noteworthy discussion of reporting needs. Companies have been known to re-implement the same ERP to fix this.

Remember The Core Business

On the flip side of understanding the customer is understanding exactly what is being manipulated and consolidated to provide a set of accounts and reports.

Often RtR improvement initiatives focus on ‘turning the handle’ of mechanical processes to produce a set of numbers: converting an insurance policy to a general ledger entry, consolidating to group level, eliminating inter-company balances, summarising into financial statement format etc. ERP systems, data integration experts and consultancies traditionally focus on these mechanical steps. In recent years a lot more focus has been placed on data warehousing, analytics, dashboarding, formatted reporting etc.; however, there is still a gap in knowledge and capability around business analysis.

This involves understanding the context of a business transaction and how it affects the accounts; this context is often provided manually via supplementary ‘outside of the main systems’ Excel-style reporting. Whether talking about variance analysis, current vs. prior period analysis, balance sheet substantiation or other activities, the RtR process requires the capability to clearly explain the position of each account or KPI during each reporting period. A huge opportunity exists to do this in as automated a way as possible and reduce lengthy streams of communication to explain and resolve unexpected results.

The end to end process

It’s worthwhile to look at a simple illustration to show how RtR breaks down into a complicated process. RtR can be considered a top level enterprise process which consists of many individual processes that chain together. This can vary considerably by industry, organisation design and technology. This illustration is not nearly complete, but shows some sample individual processes which by themselves can also break down further into additional sub processes.

As the illustration shows, a large part of RtR is a periodic process, and a key part of periodic RtR is a repeatable standard timetable. It could be suggested that RtR is more akin to a project than a typical high-volume transactional process. Within the timetable it’s worthwhile to apply critical path analysis: if the critical path is understood, resources and management attention can be focused on it, while other activities can be moved around flexibly to help smooth out peaks in resource requirements.
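As a sketch of the critical path idea, a close timetable can be treated as a dependency graph; the longest chain of dependent tasks sets the minimum elapsed time for the close. Task names, durations and dependencies below are invented for illustration:

```python
from functools import lru_cache

# Simplified close timetable: task -> (duration in days, dependencies).
# All names and durations are invented for the sketch.
TASKS = {
    "subledger_close":   (1, []),
    "accruals":          (2, ["subledger_close"]),
    "intercompany_recs": (2, ["subledger_close"]),
    "consolidation":     (1, ["accruals", "intercompany_recs"]),
    "reporting":         (2, ["consolidation"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task: str) -> int:
    """Earliest day the task can finish: its duration plus the slowest dependency."""
    duration, deps = TASKS[task]
    return duration + max((earliest_finish(d) for d in deps), default=0)

# Total elapsed close time is driven by the longest dependency chain.
final_task = max(TASKS, key=earliest_finish)
print(final_task, earliest_finish(final_task))
```

Tasks off the critical path (here, one of the two day-2 workstreams) can be shifted within their slack to smooth resource peaks without delaying the close.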

Dealing with common pain points

RtR improvement can be delivered via new ERP or business intelligence software, however often large parts of the process can be improved without any systems work. Included below are some illustrative common pain points in RtR for the purposes of discussion.


  • Time wasted during periodic activities – asking questions / discussing the process – a big resource drain and can distract key resources at critical times
    • End to end process flows + detailed procedures + clear accounting policy + known problems / issues lists can help
  • Silo knowledge in sub-components of the process – upstream inputs are not well understood and downstream requirements not met
    • Cross training / work rotation can help
  • Excessive waiting time throughout the process, waiting for data, waiting for management review etc.
    • When mapping the process, make a distinction between “effort vs. elapsed” time factors – this will highlight waiting time. Review processes should be formalised – time, scope, quality etc.
  • Overproduction – producing reports where not all information is utilised
    • Periodically review all outputs with customers on a line by line basis
    • This is perhaps one of the biggest issues, where requirements are continuously added, but old requirements never reviewed. An effort / benefit consideration should be made when considering reporting requirements.
  • Over-processing – possibility of too many reviews – poorly defined review or control structures
    • Understand exact control and decision-making requirements and ensure they are implemented appropriately
  • Over-processing – excessive time on minor adjustments both to financial numbers and commentary / messaging that may have negligible impact on customer.


  • Multiple non standard chart of accounts
    • CoA simplification is straightforward and can be run by an individual or a small team. After simplification, apply tight, formalised control on new account requests
  • Profit and cost centre hierarchies that do not mirror business structure
    • All key data hierarchies need to be designed by business and application experts and carefully controlled, unfortunately this can be difficult to correct
  • Poor quality data
    • A complex area, start with creating a data governance organisation to look into process, maintenance, applications etc.
  • Over reliance on data integration technology with custom validations and mappings
    • Try to standardise around one set of technology; longer term, build the application architecture to avoid the need for mapping by ensuring data structures are common across systems (e.g. a golden source)
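As a sketch of the ‘golden source’ idea above — checking that every local account maps to a valid golden-source account before data moves, rather than patching gaps inside the integration layer. All account codes here are invented for illustration:

```python
# Hypothetical golden-source chart of accounts (group-level account codes)
GOLDEN_ACCOUNTS = {"100000", "200000", "300000"}

# Hypothetical local-to-golden mapping maintained by a subsidiary
local_to_golden = {
    "1000": "100000",
    "1010": "100000",
    "2000": "200000",
    "9999": "999999",  # broken mapping: target absent from the golden source
}

# Validation step: flag any local account whose target is not a real account
bad_mappings = {local: target for local, target in local_to_golden.items()
                if target not in GOLDEN_ACCOUNTS}
print(sorted(bad_mappings))  # local accounts that need attention
```

Running this kind of check at source, before each load, keeps mapping errors out of the downstream consolidation and reporting steps.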


  • Localised heterogeneous systems with different business language
    • Expensive to correct; short term, align data standards across systems; long term, move towards common shared systems
  • Proliferation of systems to meet different needs without consideration of the effort to align data and manage the process, e.g. business transaction systems to local ERP to group consolidation to group analytics to group publishing, with various regulatory engines, tax engines etc.
    • Invest in a true enterprise architecture function – to be successful it should sit between business and IT; create clear standards on the use of technology and apply appropriate governance control on the selection and implementation of new technologies
  • Excessive use of spreadsheets to workaround poorly designed / implemented business applications
    • Start by creating an inventory of all ‘end user applications’, then assess the risk and feasibility of replacing them with systems that have adequate controls and reliability


  • Inability to share work to deal with peak periods
    • People in RtR tend to specialise, but the periodic nature of the work leads to imbalances in workload; where possible, look at sharing work across teams in busy periods
  • Junior staff dependent on inputs from senior staff
    • In some scenarios junior staff, e.g. a group accountant, depend on information from senior staff, e.g. a local CFO; ensure that clear escalation and stakeholder management support exists
  • Review cycles – no. of reviews, quality of review inputs, contradictory review points over successive reviews
    • The entire business analysis, commentary, interpretation, review and approval process is poorly supported by most business applications, put focus on designing with process maps, procedures etc.
  • Misaligned priorities of work between groups – local finance, group, regulatory, tax – downstream processes suffer
    • Using end to end process maps, educate each function on the impact of its work on other functions to create a stronger, unified view of what success is.

Policy & control

  • Accounting policy not optimised to deliver the defined level of quality within time constraints: rules around allowed adjustments at which times; materiality thresholds for adjustments; number of reviewers, role of reviewer and number of times reviews are carried out; guidance on handling of repeat accounting issues
    • The accounting policy should be clearly documented and kept up to date to answer any accounting related questions that repeatedly come up or to address accounting related pain points in the process. This is often under-utilised
  • Clear demarcation of responsibilities for accounts / issues / policy points between finance, tax and regulatory functional groups, including handover responsibilities within the process
    • Often RtR experiences wait time while discussions are held on who is responsible for resolving issues; there should be a clear accountability matrix, including ownership per account, ratio, KPI, process step etc. The resolution owner does not necessarily have to own every action needed to resolve the issue

How to approach long term improvement

Whether dealing with small scale continuous improvement or large scale systems implementation there are a number of things that can be done to improve the chance of making long term sustainable improvements to RtR, most of these deal with organisation structure and culture.

Assign a horizontal process lead

Assign an owner to the end to end process. It’s important that this person has authority over the end to end process and can command respect from all participants. Often this role fails as the process lead lacks power outside of their own function. Recommended responsibilities:

  • Ensure the process is documented and well understood
  • Ensure that appropriate policies are in place
  • Maintain issue list, problem list, continuous improvement list
  • Approve process, systems and organisation changes
  • Escalation contact for policy, process, systems, data issues during process execution.

Create a design authority

Organise a group with representation from all functions in scope of the process including IT. This is a senior group that can advocate for improvement work and can sponsor work through resource and budget allocation. Recommended responsibilities:

  • Review, validate, sign off any proposed change work
  • Prioritise change work based on issues / problems and provide budget / resources.

Create a global issues & problems list

Regardless of the state of process and systems documentation it’s recommended to start with a log of issues and problems encountered in the current process. This can be developed over time as and when issues are encountered. It’s recommended to use this list to capture a few specific points:

  • Impact of each issue – time, quality etc.
  • Root cause – what is the real cause of the problem
  • Solution – long term / workaround – can be used to note ideas if the solution is unknown

Once established, this list will provide a good basis for discussions around the prioritisation of change work.
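One possible shape for such a log, capturing the three points above per issue — the field names and sample entries are suggestions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    description: str
    impact: str         # time, quality etc.
    root_cause: str     # the real cause, not the symptom
    solution: str = ""  # long-term fix / workaround / ideas if unknown
    status: str = "open"

# Illustrative entries only
log = [
    Issue("Manual intercompany rec rework", impact="2 days per close",
          root_cause="Mismatched posting cut-offs",
          solution="Align cut-off policy across entities"),
    Issue("Late cost-centre data", impact="Delays consolidation",
          root_cause="Unknown"),
]

# The log then drives prioritisation discussions, e.g. what is still open:
open_issues = [i for i in log if i.status == "open"]
print(len(open_issues))
```

Even a spreadsheet with these columns works; the value is in the discipline of recording impact, root cause and solution for every issue, not in the tooling.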

Embed Lean culture

Lean is very easy to implement and brings huge potential benefit, but it can be misunderstood; some organisations jump straight to technical training on topics like 5S, waste and Kaizen. However, the heart of Lean is not about training a project team, it’s about embedding the mindset to identify issues, identify the root cause and feel empowered to propose fixes.

Most of the time people who run a process know what the issues are, however they lack a method to act on their knowledge. Implement Lean from this perspective and encourage the update of issues lists with root cause and proposed solutions. Create a forum to review input from across the organisation and ensure the top opportunities are acted upon.

Focus on change mgmt. & governance

Often continuous improvement and change programs fail not due to the technical approach, but due to the governance around requirements identification / validation, communication of the change purpose etc. It’s therefore highly recommended not to overlook change management and governance roles.

What About Business Applications?

Business applications and data flows are critical to RtR effectiveness; a lot of RtR effort is spent dealing with problems caused by suboptimal data and systems. A simplified application architecture for RtR may look something like the below:

1. A global standard ERP system that can handle all business transactions – purchasing, manufacturing, sales, marketing, human resources etc.
   a. Unfortunately this ideal architecture won’t work for industries that require specialised business applications, such as financial services

2. Common data – the ERP has one chart of accounts, one common set of hierarchies and common master data

3. A single data warehouse for all enterprise reporting optimised for the size of the business / volume of data and access / manipulation requirements

4. A limited number of specialised software products that provide functionality the ERP and data warehouse cannot; these would typically include a consolidation engine for multi-nationals and analytical tools that can handle ad hoc reporting, multidimensional analysis, budgeting and planning (scenarios, versions etc.)

5. Software required for formal presentation to internal stakeholders or external parties typically including dashboards, formatted reports, publishing and electronic file transfer.

Additional considerations:

  • Platform or software as a service (i.e. cloud) isn’t mentioned in the diagram; however, depending on the size and requirements of the business, cloud adoption could be anywhere from 0–100%
  • Theoretically with new technologies ERP and data warehousing could be done on the same technical platform, however this is not yet common.

Unfortunately this simplified architecture is not realistic for most large businesses. Even in companies that run a global single instance of ERP this tends to be restricted to certain business units. Behind the scenes a number of other systems are often required to meet special business requirements or deal with data or integration problems.

A more realistic architecture based on the financial service industry

The below diagram provides an illustration of a more realistic application architecture. In fact this is still highly simplified versus the real world, but should hopefully highlight some common challenges.

  1. Many different business systems used in the front office to deal with client transactions, covering equities, bonds, transaction banking, retail banking, asset management etc. Each of these systems may have different data models and structures
  2. Separate financial, management and regulatory processes that work concurrently on similar data and need to be reconciled throughout the process
  3. Many points of data integration as shown by black arrows, potentially different data technologies handling mapping, conversion, cleanup, reconciliation etc.
  4. Numerous data warehouses / data stores and various different reporting tools
  5. Different software applications used for different purposes; this can exist because of:
    • Acquisitions of businesses and continued use of their systems
    • Development of new processes and systems, particularly in the regulatory space, each time starting from scratch and adding new technology
    • Shadow IT within business functions building product- or unit-specific technology solutions where the business unit lead has P&L control over their own IT spend

Fixing the kind of complex architectural problems present in multi-nationals is not easy, partly due to the organisation structure and stakeholders involved. With this in mind, the most important step is introducing governance to take control of decision making on application usage:

  • Employ an Enterprise Architect – ideally not sitting directly in one business or IT function, but with leverage over both (perhaps in the office of the COO)
  • Ensure that an application repository is maintained so that a current catalog of all approved technologies, with licensing details, is available – new projects can then easily identify existing technologies to be re-used
  • Catalog ‘end user applications’, i.e. the use of Excel, Access and user-developed technologies without formal IT support, so that risk mitigation and replacement plans can be considered – these are generally control risks
  • Create a set of architecture principles that lay out the recommended software products and principles to select software products where required, these principles should promote things such as:
    • Using one data migration technology where possible
    • Optimising use of data warehouses
    • Trying to reduce overall number of different instances of software
  • Ensure software is fit for business needs – for the above to work, the organisation managing business applications must have business expertise and provide adequate solutions to business requirements.

The Future of RtR

One thing worth highlighting is the need to think about the process, systems, data and organisation aspects of RtR – most major consulting firms run their transformation projects with these streams, which does increase the likelihood of delivering effective improvements.

One point not as frequently discussed is disruptive processes and technology, for example:

  • Moving towards daily / real time RtR close
  • Fintech, including blockchain as a ledger, and its impact, e.g. as a PtP sub-ledger or banking ledger for a particular product
  • New database technologies and machine learning e.g. the ability to data mine a mass volume of local reporting (internal / external) to generate analysis to explain business performance

These are worthwhile topics to discuss one by one; however, at this point none of them will solve the end-to-end problem of making RtR effective for most large multi-nationals.

Which RtR problems have you encountered?

Which future process changes or technologies are you excited about?
