Hidden Money Audit: A Real Case Study [15 Companies]
The group looked profitable. Consolidated net income of +CHF 180K. Nothing alarming. The kind of number that ends a board meeting early.
It wasn’t the whole story.
Underneath the consolidated figure, 8 of 15 operating entities were loss-making, with combined losses of -CHF 1.9 million. The group profit existed because one entity was profitable enough to carry everyone else – barely. And that entity had a problem nobody had noticed.
This is a real case. In this write-up the holding company is called Alpen Gruppe AG; the client has been anonymized, all entity names are fictional, and all values have been proportionally adjusted. The findings are real.
What We Were Asked to Do
The original engagement was dashboard work. Build better reporting so the finance team could see entity-level performance more clearly. Standard scope.
Before building anything, we ran a probe.
The probe is a set of automated checks that scan every entity simultaneously, looking for patterns that don’t show up in consolidated reporting: budget overruns, cost outliers, AR anomalies, margin drift. It takes a few hours to configure. It surfaces things that would otherwise take weeks of manual analysis – if anyone looked at all.
Two findings came back within the first run. Both were significant. Neither was visible in the consolidated numbers.
Finding 1: The Group’s Best Performer Had a CHF 270K Problem
Bergmann AG was the most profitable entity in the group by a wide margin. Strong revenue, solid margin. On paper, exactly what you want in a holding structure.
The probe flagged something else.
Across five cost lines, Bergmann was over budget every single month in Q1 2026. Not a one-month spike. Not a seasonal anomaly. Every month, over plan, on five separate line items. Cumulative overspend for the quarter: CHF 270K.
That CHF 270K wasn’t visible at the group level. It was absorbed into the consolidated profit figure and disappeared. Bergmann still looked like the star performer.
The question this raises isn’t whether CHF 270K is material. That depends on the business. The question is: why did five cost lines run over budget simultaneously for three consecutive months without anyone noticing?
That is not a financial problem. That is a reporting problem. The profitability analysis was happening at the wrong level of detail.
Finding 2: The Loss-Maker Was Paying CHF 480K More Than Its Peers
Städtler GmbH was already loss-making at -CHF 220K. That alone is not unusual in a diverse holding group. Some entities take time to reach scale.
The probe caught something specific.
On two cost categories, Städtler was spending CHF 480K above the group median. Not above its own budget. Above what comparable entities within the same group were spending on the same types of costs.
This matters for one reason. If Städtler had costs in line with its peers, the loss would look very different. The entity might be close to breakeven. Or it might reveal that the costs are genuinely structural, which is a different kind of problem to solve.
Without the peer comparison, the loss just looks like a loss. With it, you can start asking whether it is inevitable – or whether it is a cost structure problem with a fix. The cost outlier pattern seen at Städtler is one form of the broader margin erosion dynamic — segment costs creeping above benchmark while the consolidated number stays quiet.
This is what root cause analysis applied to entity-level finance looks like. Not hypothesis-first. Pattern-first.
What the Group Numbers Were Actually Saying
| Entity | Net Result (CHF) | vs Budget | Probe Flag |
|---|---|---|---|
| Bergmann AG | +580K | -270K | Budget overrun on 5 cost lines, every month in Q1 |
| 3 mid-tier entities | +220K combined | On track | No significant flags |
| Städtler GmbH | -220K | -180K | +480K above peer median on 2 cost categories |
| 7 further loss-making entities | -1.68M combined | Mixed | Not analyzed in this run |
| 3 further profitable entities | +1.28M combined | – | Not analyzed in this run |
| Alpen Gruppe (consolidated) | +180K | – | Looks fine. Isn’t. |
The consolidated number was technically correct. The group made money. But CHF 180K of net income – built on the back of one entity carrying eight loss-making ones, while that entity quietly bleeds budget overruns – is not the same as a healthy CHF 180K result.
The difference is only visible when you look at the distribution.
Consolidated reporting, by design, hides this. That is not a flaw in accounting standards. It is just the wrong unit of analysis for a multi-entity structure.
What Did the Probe Actually Check?
The two findings above came from a structured set of automated checks, not from manual review of accounts. Here is what the probe covered in this run:
- Budget variance by entity and cost line – monthly and cumulative YTD, flagging any entity more than 10% over plan for two or more consecutive months
- Peer median cost comparison – each entity’s cost lines benchmarked against the group median for that category, flagging outliers above a defined threshold
- Margin cascade – gross, operating, and net profit margin by entity, to identify where margin is being lost in the P&L
- AR aging by entity – DSO benchmarked across entities, flagging outliers against the group average
- Revenue concentration – how dependent is each entity on a small number of customers or contracts
- Cost line frequency – how often does a given cost line exceed budget, and across how many entities simultaneously
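The revenue concentration check in the list above reduces to simple arithmetic: for each entity, what share of revenue does its single biggest customer hold? A minimal sketch in plain Python – entity names, customer names, and all figures are invented for illustration, and a 50% threshold is assumed:

```python
# Revenue by (entity, customer) – one line per customer, illustrative figures
revenue = {
    ("Entity A", "Customer 1"): 900,
    ("Entity A", "Customer 2"): 60,
    ("Entity A", "Customer 3"): 40,
    ("Entity B", "Customer 1"): 300,
    ("Entity B", "Customer 2"): 250,
    ("Entity B", "Customer 3"): 250,
    ("Entity B", "Customer 4"): 200,
}

totals, largest = {}, {}
for (entity, _), amount in revenue.items():
    totals[entity] = totals.get(entity, 0) + amount
    largest[entity] = max(largest.get(entity, 0), amount)

# Share of each entity's revenue held by its single biggest customer
concentration = {e: largest[e] / totals[e] for e in totals}

# Flag entities where one customer exceeds 50% of revenue (assumed threshold)
flags = [e for e, share in concentration.items() if share > 0.5]
# Entity A (90% from one customer) gets flagged; Entity B (30%) does not
```

The same logic extends to top-3 customer share or contract-level concentration; the point is that the check runs identically across every entity in the group.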
Not all checks returned clean results in the first run. The AR aging analysis had a data issue that needed fixing, and an additional intercompany transaction check was noisy and required refinement before it was useful.
First runs are like that. The budget variance and peer comparison checks worked cleanly. Those produced the two findings above.
How Is the Probe Built in Qlik Sense?
The probe engine runs as a separate Qlik Sense app, connected to the same data model as the main reporting environment. Each check is a set of calculated flags evaluated against configurable thresholds.
The peer median comparison works like this: for each cost category, calculate the median spend across all entities, then flag any entity above a defined multiple of that median. The core expression:
// Flag entities spending above 1.5x group median on a given cost category
// ([Entity] stands for the entity dimension in your data model)
If(
  Sum([Cost Amount]) >
    1.5 * Median(TOTAL <[Cost Category]>
            Aggr(Sum([Cost Amount]), [Cost Category], [Entity])),
  'Outlier',
  'Within Range'
)
The threshold – 1.5x here – is configurable per cost category. Some categories have more natural variance than others.
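The logic is not Qlik-specific. The same peer-median flag can be sketched in plain Python – the entity labels and spend figures below are invented, and the 1.5x threshold mirrors the expression above:

```python
from statistics import median

THRESHOLD = 1.5  # configurable per cost category, as above

# Spend per entity on a single cost category (illustrative figures)
spend = {"Entity A": 100, "Entity B": 110, "Entity C": 95,
         "Entity D": 180, "Entity E": 105}

# Benchmark each entity against the group median for the category
group_median = median(spend.values())  # 105 for these figures
flags = {
    entity: "Outlier" if amount > THRESHOLD * group_median else "Within Range"
    for entity, amount in spend.items()
}
# Entity D (180 > 1.5 * 105 = 157.5) is the only outlier here
```

Running this per cost category, across all entities at once, is the whole probe pattern in miniature.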
The budget variance check is simpler: actual vs plan, by cost line and month, with a running count of how many consecutive months a line has exceeded budget. Two consecutive months triggers a flag. Three is an alert.
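The consecutive-month counter can be sketched in a few lines of Python – the monthly variance figures are invented, with positive values meaning over budget:

```python
# Actual minus budget per month for one cost line (illustrative figures)
variance = [5_000, 12_000, 9_000]  # positive = over budget

# Running count of consecutive over-budget months
streak = 0
status = "OK"
for v in variance:
    streak = streak + 1 if v > 0 else 0
    if streak >= 3:
        status = "Alert"   # three consecutive months over plan
    elif streak == 2:
        status = "Flag"    # two consecutive months triggers a flag
# A line over budget in all three Q1 months ends the loop on "Alert"
```

One under-budget month resets the streak, which is what separates a persistent overrun from a one-month spike.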
Neither check requires custom development per entity. The probe runs across all 15 entities simultaneously. That is the point. The signal only becomes visible at scale, when you can compare entities against each other rather than looking at each one in isolation.
What Should C-Level Executives Do With This?
The findings above are not conclusions. They are starting points. Here is what to do with each type of finding.
If you find a budget overrun like Bergmann AG
- This week: Identify which cost center managers own the overrunning lines. Do they know they are over plan? In most cases they do not – they see their own costs but not the budget variance in a systematic, consolidated way.
- This month: Reforecast the year. If Q1 is already CHF 270K over budget on one entity, the full-year number needs updating. Operating on a stale budget is worse than operating on no budget.
- Structurally: Ask why this was not visible before. If your current reporting shows entity-level P&L but not cost-line variance against budget, that is a reporting gap. The gap is what allowed three months to pass unnoticed. Zero-based budgeting closes this gap by forcing every line to be justified from scratch each period, making overruns immediately apparent rather than buried in variance columns.
If you find a cost outlier like Städtler GmbH
- This week: Pull the flagged cost categories side by side against the median-performing entity. Is the difference in headcount, contractors, materials, or something else? The peer comparison tells you there is a gap. It does not tell you why.
- This month: Determine if the costs are contractually locked in or variable. Locked means a contract review problem. Variable means a management problem. They require different responses.
- Structurally: Decide whether the entity’s losses are structural or fixable. CHF 480K above peer median on two cost lines suggests the loss is not inevitable. But you need to verify that before drawing conclusions about the entity’s long-term viability in the group.
What every CFO in a multi-entity structure should do regardless
Stop leading with consolidated. Lead with the distribution.
A board pack that opens with group net income and breaks down by entity in the appendix is answering the wrong question first. The question is not “did we make money?” The question is “which entities are performing, which are not, and why?” That requires the entity view up front.
If your current finance dashboard does not show budget variance and peer cost comparison by entity as a standard view, that is the first thing to build. Everything else – root cause work, peer benchmarking, cost structure analysis – depends on this view existing. A structured diagnostic framework applied at the entity level is far more useful than the same analysis applied to consolidated figures.
What Does the Next Analysis Layer Look Like?
The probe run above was a first pass. It found two significant things. A more complete picture of a holding group of this size would also include:
- DSO by entity – are some entities collecting receivables significantly slower than others? A 15-day DSO gap between two similar entities represents a material cash flow difference. The cash conversion cycle plays out very differently across entities in the same group, and comparing them directly surfaces the outliers.
- DPO analysis – are you paying suppliers faster than necessary? Days Payable Outstanding varies significantly across entities in most holding groups, often for no strategic reason.
- Cost rigidity mapping – which cost lines are genuinely fixed, and which are variable? If revenue drops 20%, what portion of costs follow? This matters most for the loss-making entities, where the distinction between structural losses and operational ones determines whether a turnaround is realistic.
- Intercompany reconciliation – are intercompany transactions clean across all entities? Discrepancies here can distort entity-level results significantly, making profitable entities look less profitable and absorbing losses that belong elsewhere.
- Seasonal pattern analysis – are the Q1 budget overruns at Bergmann AG a seasonal pattern, or is Q1 just when a structural problem became visible? Three months of data is not enough to answer this definitively.
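The DSO point in the list above is worth making concrete: the cash tied up by a DSO gap is just the gap in days times average daily revenue. A quick sketch with invented figures:

```python
# Two similar entities, annual revenue CHF 10M each (illustrative)
annual_revenue = 10_000_000
daily_revenue = annual_revenue / 365

dso_fast, dso_slow = 35, 50      # days sales outstanding (assumed values)
gap_days = dso_slow - dso_fast   # the 15-day gap from the text

cash_tied_up = gap_days * daily_revenue
# ~ CHF 411K permanently parked in receivables at the slower entity
```

On CHF 10M of revenue, a 15-day gap means roughly CHF 411K of cash sitting in receivables that a peer entity has already collected.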
Each of these adds a layer. The first probe run takes hours. A complete picture of a 15-entity group takes a few weeks of iterative analysis. The findings compound – each layer surfaces new questions that the next layer answers.
Frequently Asked Questions
How long does a probe like this take to set up?
The first run, including data connection and threshold configuration, takes a few hours when the data model is clean. In practice there are almost always data quality issues in the first run – missing cost line mappings, inconsistent entity codes, AR date logic problems. Accounting for that, a working first probe usually takes one to two days.
Does this require Qlik Sense specifically?
No. The logic works in any BI tool that supports calculated flags and cross-entity comparisons. Qlik’s in-memory associative engine makes the peer median comparisons fast across large datasets, but the same analysis is possible in Power BI or well-structured SQL. The tool matters less than having a consistent data model across all entities in the group.
What data do you need to run this?
At minimum: actuals vs budget by cost line, by entity, by month. Everything else – AR aging, peer comparisons, margin cascade – builds on the same foundation with additional tables. If your ERP exports this data in a consistent format across entities, you can run the probe. Most mid-market ERP systems do.
How do you handle entities with different business models in the same group?
The peer median comparison is most useful within comparable entity types. In a group with both service-model and asset-heavy entities, you would run separate peer groups – service entities against each other, asset-heavy entities against each other. Comparing them directly would produce meaningless outlier flags. The probe needs a peer group definition, not just a list of entities.
What happened after this analysis?
The client chose to focus on the planned dashboard build and actuals reporting (IST) work that was the original scope. The probe findings are documented. This is common. Organizations rarely act on findings immediately – they act when the context makes it the right priority. Having the findings documented means the starting point is there when that moment comes.
The Takeaway
A consolidated profit of CHF 180K masked CHF 270K in undetected budget overruns at the group’s most profitable entity and CHF 480K in excess costs at a loss-making one.
Neither finding required extraordinary analysis. Both required looking at the right level of detail.
The problem in most holding groups is not data access. The data exists in the ERP. The problem is that consolidated reporting is the default, and the default hides the signal. A systematic probe changes what is visible. That is all it does. But what becomes visible is often significant.
The most dangerous number in group reporting is a consolidated profit that looks fine. It is fine on average. Averages hide a lot.
The Terms Gap, CCC drift, customer profitability inversion, and inventory carrying cost are the structural patterns behind findings like these — named and quantified in the Revenue Leakage: 5 Patterns Hiding in Your Data guide, which puts EUR numbers to each one.
If you want to understand where money hides in a multi-entity structure, start with how your current reporting is structured. If the entity-level view is not the first thing you see, it is probably the last thing anyone looks at.
Where to go next:
- If you want to understand the full profitability cascade and where margin disappears between gross and net: Profitability Analysis – Where Margin Actually Goes
- If the budget overrun finding resonates and you want a framework that prevents it by design: How Zero-Based Budgeting Forces Cost Visibility
- If you want to understand how working capital metrics like DSO and DPO differ across entities in the same group: Cash Conversion Cycle – The Working Capital Trap
If you want to run the working capital calculations yourself first: the calculator tools cover DSO, DPO, DIO, and gross margin. Start with whichever metric the probe would flag first for your business.
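The formulas behind those calculators are simple enough to verify by hand. A sketch of the standard definitions, with all input figures invented for illustration:

```python
def dso(receivables: float, revenue: float, days: int = 365) -> float:
    """Days Sales Outstanding: how long revenue sits in receivables."""
    return receivables / revenue * days

def dpo(payables: float, cogs: float, days: int = 365) -> float:
    """Days Payable Outstanding: how long costs sit in payables."""
    return payables / cogs * days

def dio(inventory: float, cogs: float, days: int = 365) -> float:
    """Days Inventory Outstanding: how long stock sits before sale."""
    return inventory / cogs * days

def gross_margin(revenue: float, cogs: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

# Cash conversion cycle = DIO + DSO - DPO (illustrative inputs, CHF)
ccc = (dio(1_000_000, 6_000_000)
       + dso(1_500_000, 10_000_000)
       - dpo(900_000, 6_000_000))
```

Computed per entity rather than for the consolidated group, these four numbers are exactly where the cross-entity outliers discussed above become visible.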