Inventory distortion - the gap between actual and recorded stock - costs the global retail industry 1.77 trillion US dollars per year (IHL Group/Retail TouchPoints), split into 1.2 trillion in out-of-stocks and 562 billion in overstock. Anyone who only syncs ERP, shop and marketplaces hourly or daily loses revenue to faster competitors and risks overselling. Multi-channel sync reduces stockouts by up to 40% (Unicommerce), and 66% of consumers lose trust after an overselling incident (SyncAuction Survey). This article shows how event-driven architecture, webhooks and reservation logic keep inventory in sync between ERP, Shopware and marketplaces in under five seconds.

[Diagram: Multi-channel inventory sync via event bus. ERP/WMS stock events (stock levels, pricing, order status, reservations, returns, replenishment) flow through a message broker - with idempotency keys and versioning - to the Shopware storefront, Amazon SP-API (webhook), eBay (polling fallback) and OTTO Market (webhook + REST) in under five seconds. Latency comparison ERP to channel: polling 5-15 min, webhook <30 ms, event-driven <5 s. Sources: IHL Group, Unicommerce, KPMG, Svix, Nventory]

What inventory distortion really costs

Online shops average an 8% out-of-stock rate while simultaneously holding 20-30% overstock (Finaloop) - waste in both directions. The 2026 industry standard is 99.8% inventory accuracy (Crazyvendor/Retail Exec), while NetSuite benchmarks show 97-99% for world-class players and only 90-95% on average. An Unleashed Software study (2024) reports that 58% of D2C manufacturers sit below 80% accuracy. Top-quartile retailers reach 99.9% through RFID, AI forecasting and real-time sync (Omniful).
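Inventory accuracy in these benchmarks is commonly measured as the share of SKUs whose recorded stock matches a physical count. A minimal sketch of the metric (function name and sample data are illustrative, not from any of the cited studies):

```javascript
// Share of SKUs whose recorded stock matches the counted stock.
function inventoryAccuracy(records) {
  const matching = records.filter(r => r.recorded === r.counted).length;
  return matching / records.length;
}

// Illustrative sample: 3 of 4 SKUs match -> 75% accuracy.
const sample = [
  { sku: 'A-100', recorded: 12, counted: 12 },
  { sku: 'A-101', recorded: 5,  counted: 4  }, // drift of 1 unit
  { sku: 'A-102', recorded: 0,  counted: 0  },
  { sku: 'A-103', recorded: 30, counted: 30 },
];
```

At scale the same calculation runs over cycle-count batches rather than full counts, but the metric is identical.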

Marketplace reality

Even a 2% stockout rate triggers permanent visibility loss on Amazon and Walmart (Amazon Business Blog). At 500,000 EUR annual revenue, a missing sync produces 30-50 overselling cases per year (3-5% of transactions, IT Group). An ODR above 1% triggers Amazon account suspension (My Amazon Guy/SellerApp) - putting the entire marketplace integration at risk.

Within your own shop, inventory distortion also affects central KPIs: conversion rate, average order value and repeat purchase rate drop measurably as soon as products show as available but have to be cancelled shortly after. Structured checkout optimization only helps when the displayed inventory matches reality - otherwise operational weaknesses cancel out technical optimizations. The combination of reliable sync and clean frontend is therefore not a luxury but a precondition for scalable e-commerce architecture.

Manual inventory management ties up 15-25 hours per week per shop (ShipBob), while KPMG (2025) reports that 40% of consumers rate stockouts as their main concern and 40% switch to a competitor immediately. Anyone working with basic tools at a 15-30-minute sync delay (Veeqo) systematically loses revenue to providers with real-time architecture.

The cost structure of inventory distortion splits into three areas: direct revenue loss from unsellable items, marketplace penalty costs, and hidden follow-on costs from customer churn and review damage. The last point is often underestimated: a single overselling case with subsequent cancellation typically generates negative reviews that depress conversion on the product page for weeks. In B2B contexts, contractual service levels add to the picture, potentially triggering flat fees or contractual penalties on delivery delays - here, real-time sync is not just an efficiency question but part of risk management.

Webhooks vs polling: the architectural decision

Polling queries the source system at fixed intervals - average latency lands at half the interval (Svix/Merge.dev). With 30-second polling and 10,000 active connections, that produces 28.8 million requests per day, of which 99% return empty (Svix). Webhooks instead push notifications with latencies below 30 ms (e.g. AWS SNS) and avoid idle traffic entirely. Event-driven architectures with a message broker typically reach end-to-end latency below five seconds (Nventory).
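The request-volume figure above follows directly from interval arithmetic; a quick check with the numbers from the Svix example (function names are illustrative):

```javascript
// Requests per day for interval polling: one request per connection per interval.
function pollingRequestsPerDay(intervalSeconds, connections) {
  return (86400 / intervalSeconds) * connections;
}

// Average polling latency is half the interval: a change lands, on average,
// mid-way between two polls.
function avgPollingLatencySeconds(intervalSeconds) {
  return intervalSeconds / 2;
}

// 30 s interval, 10,000 connections -> 28.8 million requests per day.
const dailyRequests = pollingRequestsPerDay(30, 10000);
```

The same arithmetic explains the 5-15-minute latency band for 15-minute polling: the average sits at 7.5 minutes, the worst case at the full interval.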

The architectural decision depends on more than latency: who controls the source system? Does the marketplace API even support webhooks? How are outages and replays handled? Polling is trivial to implement and robust against short network outages, since the next interval catches up missed state. Webhooks require a reliable receiver infrastructure with persistence, retry logic and dead-letter handling. Event-driven architecture combines the strengths of both worlds: push latency like webhooks, replay capability like polling, plus horizontal scaling through consumer groups.

| Criterion | Polling (15 min) | Webhook | Event-Driven (Bus) |
| --- | --- | --- | --- |
| Latency ERP to channel | 5-15 minutes | under 30 ms | under 5 seconds |
| Load on source system | high (99% empty) | minimal | minimal |
| Replay/recovery | manual | limited | complete |
| Channel scaling | linearly expensive | moderate | horizontal |
| Conflict resolution | weak | selective | version-based |
| Marketplaces without webhook | standard | not possible | polling fallback |

In practice, polling remains essential as a fallback for marketplaces without webhook APIs (e.g. eBay), while shop, Amazon SP-API and OTTO Market integrate via webhooks. Shopware projects switching to real-time also benefit from the PHP 8.5 performance gains, because background event listeners block fewer requests.

Equally important is endpoint security: HMAC signature verification is standard, complemented by IP allowlisting for critical sources. Replay attacks are mitigated through timestamp tolerances (typically 5 minutes) and nonce tracking. For sensitive data - prices or customer information - payload encryption is additionally recommended. Custom programming of the endpoints lets these security layers integrate into application logic without workarounds.

Event-driven pattern with a message broker

A message broker decouples producers (ERP, WMS) from consumers (shop, marketplaces). Established options include Apache Kafka, RabbitMQ and AWS SNS/SQS - selection depends on volume, ordering requirements and hosting strategy. Stock changes are published as events, and each channel subscriber consumes the topics relevant to it.

event-publisher.js
// Example: publish a stock event (RabbitMQ-style pseudocode)
async function publishStockEvent(sku, newQty, source) {
  const event = {
    id: crypto.randomUUID(), // unique event ID
    type: 'stock.updated',
    version: await getNextVersion(sku),
    sku: sku,
    quantity: newQty,
    source: source, // 'erp', 'shop', 'amazon'
    timestamp: new Date().toISOString()
  };

  await broker.publish('inventory.events', event, {
    persistent: true,
    headers: { 'idempotency-key': event.id }
  });
}

// Example: consumer with at-least-once delivery
broker.consume('inventory.events', async (msg) => {
  const event = JSON.parse(msg.content);
  if (await wasProcessed(event.id)) return msg.ack(); // duplicate delivery
  if (event.version <= await getCurrentVersion(event.sku)) return msg.ack(); // stale event

  try {
    await applyToChannel(event);
    await markProcessed(event.id);
    msg.ack();
  } catch (err) {
    msg.nack(); // requeue for retry; repeated failures end up in the dead-letter queue
  }
});

In a Shopware setup, the broker connects via middleware that publishes ERP stocks into topics like stock.updated, price.changed and order.reserved. Anyone migrating from an organically grown point-to-point architecture will find the transition path to a bus-based design in our middleware integration guide.

Topic design is the most important architectural lever: overly coarse topics generate unnecessary traffic for every subscriber, while overly fine topics make routing and operations complex. A two-tier hierarchy has proven effective - one main topic per domain (inventory, pricing, orders) plus routing keys for sub-domains like warehouse, channel or product group. This way, an Amazon adapter only consumes Amazon-relevant stock events while a reporting service can process every event in parallel. During peak loads - Black Friday campaigns or flash sales - the broker uses backpressure to ensure producers throttle their throughput to consumer capacity rather than flooding databases.
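The two-tier hierarchy maps naturally onto routing-key patterns as used by RabbitMQ topic exchanges, where `*` matches one segment and `#` matches any remaining segments. A simplified sketch of that matching logic (a subset of full AMQP semantics; key names are illustrative):

```javascript
// Simplified topic matching: '*' matches exactly one segment,
// a trailing '#' matches any remaining segments.
function matchesBinding(routingKey, pattern) {
  const key = routingKey.split('.');
  const pat = pattern.split('.');
  for (let i = 0; i < pat.length; i++) {
    if (pat[i] === '#') return true;               // wildcard: rest matches
    if (i >= key.length) return false;             // pattern longer than key
    if (pat[i] !== '*' && pat[i] !== key[i]) return false;
  }
  return key.length === pat.length;                // no extra segments left
}

// An Amazon adapter binds 'inventory.stock.amazon', a reporting service
// binds 'inventory.#' and sees every event in parallel.
```

With this scheme, adding a channel means adding one binding rather than touching every producer.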

Idempotency keys: handle duplicate events safely

Webhooks and message brokers deliver at-least-once in practice - the same event can arrive multiple times. Without idempotency protection, this produces duplicate bookings, wrong stocks and race conditions between parallel consumers. The standard pattern: every event carries a unique idempotency-key, which the receiver stores for 24-72 hours.

Idempotency state is typically stored in a fast key-value store like Redis or a dedicated status table. An atomic check-and-set with lock acquisition is critical - otherwise two parallel consumers may both start processing before either sets the status. In high-frequency scenarios, an additional Bloom filter as a pre-check catches most duplicates without an expensive storage hit. The TTL for idempotency keys is aligned with the producer's maximum retry window - 24 hours is standard, up to 72 hours for critical marketplace events.

Beyond idempotency, exactly-once semantics cannot truly be guaranteed at application level - it can only be approximated through the combination of idempotency keys, version management and transactional writes. For critical operations such as stock reductions, the outbox pattern is additionally recommended: local changes are persisted together with outbound events in a single database transaction, and a background process reliably publishes the events to the bus afterwards. This keeps the system consistent even during broker outages, because no change can alter data without a corresponding event.
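The outbox pattern described above can be sketched with an in-memory stand-in for the database transaction (all names are illustrative; in production the two writes share one SQL transaction and the relay runs as a separate process):

```javascript
// Outbox pattern: a stock change and its outbound event are committed
// together; a relay publishes pending events to the broker afterwards.
class OutboxStore {
  constructor() {
    this.stock = new Map();   // sku -> { qty, version }
    this.outbox = [];         // events awaiting publication
  }

  // Stand-in for one DB transaction: both writes succeed or neither does.
  applyStockChange(sku, qty, version) {
    this.stock.set(sku, { qty, version });
    this.outbox.push({
      id: `${sku}-${version}`, type: 'stock.updated',
      sku, quantity: qty, version, published: false,
    });
  }

  // Relay: publish unpublished events in order, marking them only on success.
  drain(publish) {
    for (const event of this.outbox) {
      if (event.published) continue;
      publish(event);         // broker down -> throws, event stays pending
      event.published = true;
    }
  }
}
```

If the broker is unreachable, `drain` throws and simply retries on the next run - no stock change can exist without its event eventually reaching the bus, which is exactly the consistency guarantee the pattern provides.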

IdempotentStockHandler.php
<?php
class IdempotentStockHandler
{
    public function handle(StockEvent $event): void
    {
        $lockKey = 'stock:event:' . $event->id;

        // 1. Atomic SET-NX with TTL (Redis pattern)
        $acquired = $this->redis->set(
            $lockKey,
            json_encode(['status' => 'processing']),
            ['NX', 'EX' => 86400]
        );

        if (!$acquired) {
            $this->logger->info('Duplicate event skipped', ['id' => $event->id]);
            return;
        }

        // 2. Version check (conflict resolution)
        $current = $this->repo->getVersion($event->sku);
        if ($event->version <= $current) {
            $this->logger->info('Stale event ignored', ['sku' => $event->sku]);
            return; // lock stays set: the stale event must not be retried
        }

        // 3. Apply + version update in one transaction;
        //    release the lock on failure so a redelivery can retry
        try {
            $this->repo->applyStock($event->sku, $event->quantity, $event->version);
        } catch (\Throwable $e) {
            $this->redis->del($lockKey);
            throw $e;
        }
    }
}

Conflict resolution: who wins?

When stock changes arrive simultaneously from multiple sources (ERP goods receipt vs. shop order vs. Amazon adjustment), the system needs a strategy for which change applies. Three established patterns:

| Strategy | Last-Write-Wins | Version vector | Vector clocks |
| --- | --- | --- | --- |
| Complexity | low | medium | high |
| Accuracy | often wrong | deterministic | full causality |
| Storage overhead | minimal | 1 counter per SKU | counter per source |
| Suitable for | single source | multi-channel sync | multi-region active-active |
| Example use | read-only caches | standard inventory sync | distributed warehouses |

For most multi-channel setups with one leading ERP, a version vector per SKU is sufficient: every stock mutation increments the counter, and consumers only accept events with a higher version. For distributed warehouses or multi-region setups - for example when SAP Business One manages two locations in parallel - vector clocks become relevant to distinguish true concurrency from causal changes.

Last-write-wins looks tempting in its simplicity, but fails on clock drift and network latency: two events milliseconds apart can be merged incorrectly depending on order, leading to inventory accounting errors. Version vectors solve this deterministically because order is determined not by timestamps but by monotonically increasing version numbers. For genuinely concurrent edge cases - two locations booking simultaneously - an additional conflict-resolution function is recommended that either merges causally separated changes (sum strategy) or routes them to a reconciliation queue for manual review.
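The version check and the sum strategy for concurrent deltas can be reduced to a few lines. This is a sketch under the assumptions stated above - one monotonically increasing version counter per SKU, with negative merge results routed to a reconciliation queue (names are illustrative):

```javascript
// Version check: a consumer accepts only events with a strictly higher version.
function shouldApply(eventVersion, currentVersion) {
  return eventVersion > currentVersion;
}

// Sum strategy for genuinely concurrent stock deltas from two locations:
// both deltas are applied additively; if the result would go negative,
// the SKU is routed to reconciliation instead of being written.
function mergeConcurrentDeltas(baseQty, deltaA, deltaB) {
  const merged = baseQty + deltaA + deltaB;
  return merged >= 0
    ? { status: 'merged', qty: merged }
    : { status: 'reconcile', qty: baseQty }; // manual review, keep last known qty
}
```

Note that `shouldApply` is what makes last-write-wins clock drift irrelevant: ordering comes from the counter, never from wall-clock timestamps.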

Reservation logic in checkout

Reservation logic prevents overselling by temporarily blocking inventory at checkout start, before the order is finalized. There are two basic reservation types - soft and hard - and in practice they are almost always combined with further patterns to keep conversion rate high and overselling risk low:

  • Soft reservation (typically 10-15 minutes checkout timeout): stock is marked as reserved but not physically locked. On timeout it automatically returns. Suitable for fast checkout and low conversion friction.
  • Hard reservation (several hours to days): stock is locked in the ERP until payment is confirmed. Suitable for B2B orders on invoice, B2B self-service portals and high-priced items.
  • Multi-tier reservation: soft reservation in shop, hard reservation only after payment authorization - the standard in professional e-commerce.
  • Channel-pool reservation: a separate reservation pool per channel (e.g. 80% shop, 15% Amazon, 5% buffer) to prevent marketplace overselling.
  • Safety stock logic with dynamic reservation reduces stockouts by 25-40% according to Opensend.
  • Auto-release with heartbeat mechanism: reservations expire when checkout becomes inactive - including a release event back to the bus.

From an architectural perspective, reservation logic is a standalone service sitting between frontend and ERP, owning per-SKU stock authority. It reads current stock events from the bus and writes reservation events back, which ERP and marketplace adapters then interpret as availability-reducing. Separating physical warehouse stock from reserved stock is critical - the same model also fits B2B e-commerce, where customer-group-specific reservations, minimum order quantities and framework-contract logic are added.
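A minimal in-memory sketch of such a reservation service - soft reservations with automatic expiry, converted to hard reservations on payment authorization. A production version would persist state and emit release events to the bus; class name and the 15-minute TTL are illustrative:

```javascript
// Soft reservations with TTL: available = physical stock - active reservations.
class ReservationService {
  constructor(ttlMs = 15 * 60 * 1000) {
    this.ttlMs = ttlMs;
    this.physical = new Map();      // sku -> physical stock
    this.reservations = new Map();  // reservationId -> { sku, qty, expiresAt }
  }

  setStock(sku, qty) { this.physical.set(sku, qty); }

  available(sku, nowMs = Date.now()) {
    let reserved = 0;
    for (const r of this.reservations.values()) {
      if (r.sku === sku && r.expiresAt > nowMs) reserved += r.qty; // expired ones don't count
    }
    return (this.physical.get(sku) ?? 0) - reserved;
  }

  // Soft reservation at checkout start; fails if free quantity is insufficient.
  reserve(id, sku, qty, nowMs = Date.now()) {
    if (this.available(sku, nowMs) < qty) return false;
    this.reservations.set(id, { sku, qty, expiresAt: nowMs + this.ttlMs });
    return true;
  }

  // Payment authorized: convert to a hard reservation by deducting physical stock.
  confirm(id) {
    const r = this.reservations.get(id);
    if (!r) return false;
    this.physical.set(r.sku, this.physical.get(r.sku) - r.qty);
    this.reservations.delete(id);
    return true;
  }
}
```

Expired reservations simply stop counting against availability - the auto-release behavior from the list above falls out of the TTL check rather than requiring a cleanup job.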

Marketplace specifics: Amazon, eBay, OTTO

Amazon SP-API

Webhook notifications via SNS/SQS, FBA and FBM stocks separated, ODR-critical. More in our Amazon integration guide.

eBay Trading API

No classic webhook model, so polling fallback at 1-2 minute intervals. Notifications only for sales, active stock push via ReviseInventoryStatus.
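For channels like eBay, the polling fallback is typically wrapped in an adapter that diffs each stock snapshot against the previous one and emits only real changes as bus events - downstream consumers then see the same uniform event stream as for webhook-backed channels. A sketch of the diff step (function name and shapes are illustrative):

```javascript
// Polling fallback adapter: turn periodic stock snapshots into change events.
function diffSnapshot(previous, current, source) {
  const events = [];
  for (const [sku, qty] of Object.entries(current)) {
    if (previous[sku] !== qty) {
      events.push({ type: 'stock.updated', sku, quantity: qty, source });
    }
  }
  return events;
}

// Two consecutive polls of a channel without webhooks:
// only B (changed) and C (new) produce events, A is suppressed.
const events = diffSnapshot({ A: 3, B: 5 }, { A: 3, B: 4, C: 1 }, 'ebay');
```

The adapter thereby acts as an event producer on the bus, abstracting the polling layer away from every consumer.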

OTTO Market

REST API with webhook subscription for order events, stock via PUT on SKU endpoint. Delivery time service levels affect marketplace visibility.

Kaufland & idealo

REST push, no webhook. Stocks pushed in batches - modeled as a polling fallback in the sync layer.

Shopware Storefront

Native stock API via Admin API, plus event bus integration through plugin. Prepares for Shopware 7 migration on CE architecture.

Custom middleware

Channel adapters with a unified internal data model - reduces complexity and enables future channel additions in days rather than weeks.

Safety stock and per-channel buffers

Even seamless real-time sync has physical limits: a few seconds always pass between ERP booking and marketplace update, during which parallel orders can arrive. The answer is channel buffers - a defined share of total stock is reserved per channel, with critical marketplaces receiving additional safety reserves. Best practice is dynamic buffer logic that scales with sales velocity and channel risk: high-frequency marketplaces with hard penalties (Amazon ODR) get a larger safety buffer than low-volume channels. For B2B setups with make-to-order, a Dynamics 365 Business Central integration with available-to-promise logic helps include free production capacity in the available stock.

Buffer strategies can be differentiated by product class: A-items with high turnover typically receive tight buffers, because reposition is fast and predictable. C-items with long replenishment lead time get a percentage safety stock instead. Seasonal items are ideally fitted with time-dependent buffers that automatically rise before peak phases (Black Friday, Christmas). Combined with SAP Business One or Microsoft Dynamics, this logic can be derived directly from forecast data and implemented in the sync layer as a dynamic factor per channel.
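The dynamic buffer logic above can be reduced to a small formula: buffer = sales expected during the sync window, scaled by a channel risk factor and an optional seasonal multiplier. A sketch with purely illustrative parameters:

```javascript
// Dynamic per-channel safety buffer: cover the sales expected during the
// sync window, scaled by channel risk and an optional seasonal factor.
function channelBuffer({ dailySales, syncWindowHours, riskFactor, seasonFactor = 1 }) {
  const salesDuringWindow = dailySales * (syncWindowHours / 24);
  return Math.ceil(salesDuringWindow * riskFactor * seasonFactor);
}

// High-risk Amazon channel vs. low-volume niche channel (illustrative numbers):
const amazonBuffer = channelBuffer({ dailySales: 120, syncWindowHours: 1, riskFactor: 3 });
const nicheBuffer  = channelBuffer({ dailySales: 4,   syncWindowHours: 1, riskFactor: 1.2 });
```

Seasonal peaks are handled by raising `seasonFactor` ahead of Black Friday or Christmas; fed from ERP forecast data, the whole formula becomes a per-channel factor in the sync layer.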

An often-overlooked aspect is behavior at boundaries: what happens when stock drops to zero, but a webhook for a reservation arrives in the same instant? What if a marketplace pauses a product listing due to a delivery issue while the sync layer keeps reporting stock? Robust buffer management addresses these edge cases through explicit state models per SKU - with statuses such as available, reserved, oversold, paused, discontinued - and clearly defined transitions between them. This state machine sits between frontend, ERP and marketplace, ensuring business rules apply cleanly even on rare events.

Monitoring and alerting for sync errors

A real-time sync system without monitoring is a black box - faults only become visible when marketplace penalties or cancellations arrive. Best practice is therefore a three-tier monitoring concept: technical monitoring of infrastructure (broker health, queue depth, lag), functional monitoring of sync quality (drift between ERP and channel, reservation heatmap, overselling counter), and business monitoring of KPIs (conversion on availability displays, cancellation rate, marketplace ODR). All three layers should converge in a central dashboard and automatically generate alerts on threshold breaches, backed by clear runbooks and escalation paths.

  • Event lag per channel: time between event.published and channel.applied - alert above 30 seconds
  • Dead-letter queue volume: events that failed multiple times - daily review
  • Version drift: SKUs with deviation between ERP and channel side - automated reconciliation
  • Webhook delivery rate: per-channel delivery rate with threshold above 99.5%
  • Reservation abandonment rate: share of expired reservations - signals checkout problems
  • Overselling counter: orders resulting in negative stock - should trend to zero
  • Marketplace ODR: order-defect rate on Amazon, OTTO, Kaufland with early warning above 0.5%
  • End-to-end test with synthetic stock mutations every hour - combined with shop monitoring
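The first metric in the list - event lag per channel - can be computed directly from the timestamps events already carry on the bus. A sketch of the threshold check, using the 30-second alert threshold stated above (names are illustrative):

```javascript
// Event lag: time between event.published and channel.applied, per channel.
// Returns the channels breaching the alert threshold.
function lagAlerts(samples, thresholdMs = 30000) {
  const worst = new Map(); // channel -> worst lag observed
  for (const s of samples) {
    const lag = s.appliedAt - s.publishedAt;
    worst.set(s.channel, Math.max(worst.get(s.channel) ?? 0, lag));
  }
  return [...worst.entries()]
    .filter(([, lag]) => lag > thresholdMs)
    .map(([channel, lagMs]) => ({ channel, lagMs }));
}
```

In practice a percentile (e.g. p95) is often more robust than the worst case, since a single replayed event would otherwise trip the alert.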

Migration roadmap: from polling to event-driven

  1. Phase 1 - assessment (2-3 weeks): document current sync architecture, measure stockout rate, capture marketplace ODR values, record latency baseline.
  2. Phase 2 - introduce message broker (3-4 weeks): broker selection based on volume and ordering requirements, topic design, producers in ERP/WMS, running in parallel to existing sync.
  3. Phase 3 - channel adapters (4-8 weeks): stepwise channel migration - shop first, then Amazon, then OTTO and eBay (with polling fallback). Activate idempotency protection and version management.
  4. Phase 4 - reservation logic and buffers (2-3 weeks): introduce soft and hard reservation, configure channel buffer pools, activate multi-tier locks for checkout.
  5. Phase 5 - monitoring, cutover, optimization (ongoing): dashboards, alert thresholds, dead-letter handling, reconciliation jobs, cutover from polling to event-driven, continuous optimization. For JTL-Wawi setups see our JTL integration article.

Anyone keeping shop performance in scope simultaneously achieves measurable conversion gains through hand-in-hand optimization of sync latency and frontend speed. Combining this work with Lighthouse 100 optimization is therefore a fixed part of the roadmap in nearly all our projects. Other areas benefit too: server-side tracking can hook into the event bus, subscription commerce reuses the same reservation patterns for recurring deliveries, and JSON-LD schema benefits from reliable availability data in product structures.

A typical risk in migration projects is the big-bang approach - switching all channels to the new system at once. Practice shows that a parallel dual-run over two to four weeks per channel significantly reduces migration risk: the polling sync keeps writing, while the event-driven path runs in shadow mode and its computed stocks are validated against real data. Only after successful acceptance is the cutover activated per channel; the old sync path is then demoted to a read-only backup and decommissioned after 30 days.
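During the dual-run, validation means comparing the stocks the legacy polling sync actually wrote against what the shadow path computed. A sketch of such a reconciliation check (names are illustrative; the cutover criterion would typically be zero drift over the full validation window):

```javascript
// Dual-run validation: compare the legacy sync's written stocks with the
// shadow (event-driven) path's computed stocks, reporting drift per SKU.
function dualRunDrift(legacy, shadow) {
  const drift = [];
  const skus = new Set([...Object.keys(legacy), ...Object.keys(shadow)]);
  for (const sku of skus) {
    const a = legacy[sku] ?? 0;
    const b = shadow[sku] ?? 0;
    if (a !== b) drift.push({ sku, legacy: a, shadow: b });
  }
  return drift;
}
```

Any non-empty result points either at a bug in the new path or at an undocumented write in the old one - both worth finding before cutover.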

Sources and studies

This article draws on data from: IHL Group, Retail TouchPoints, Unicommerce, NetSuite, Crazyvendor, Unleashed Software (2024), Omniful, Amazon Business Blog, Finaloop, IT Group, Svix, Merge.dev, Nventory, KPMG (2025), Opensend, Veeqo, ShipBob, My Amazon Guy, SellerApp and SyncAuction. Numbers may vary by survey period and industry.

The 2026 standard is typically below five seconds end-to-end (Nventory). 15-minute polling is generally too slow for competitive marketplaces, because as little as 2% stockout rate can trigger visibility loss on Amazon (Amazon Business). We analyze your current sync architecture and recommend a suitable latency class per channel.

Webhooks typically suffice for simple single-channel setups. Once three or more channels are synced in parallel, replay capability is needed, or the source system cannot connect every receiver directly, a message broker is usually the more robust choice. The decision depends on volume, ordering requirements and hosting strategy.

When checkout starts, stock is marked for a defined period (typically 10-15 minutes soft reservation) and becomes unavailable to parallel orders. After payment authorization the hard reservation lands in the ERP. Combined with channel buffers, overselling typically tends toward zero - though a residual risk from physical latencies can never be eliminated entirely.

Marketplaces without webhook push are usually integrated via polling fallback at 1-2 minute intervals, with active stock push running in parallel via the trading API. The architecture typically uses a channel adapter that internally acts as an event producer and abstracts the polling layer in front of the bus.

The 2026 industry standard is 99.8% accuracy (Crazyvendor/Retail Exec), world-class players reach 99.9% (Omniful), while the NetSuite average sits at 90-95%. With event-driven sync, idempotency protection and clean reservation logic, the upper end of this range is typically achievable.

In our experience, a complete migration takes between three and five months across five phases (see roadmap), depending on channel count and ERP complexity. We recommend a stepwise channel-by-channel cutover with parallel operation of both sync paths to minimize risk during live operations. Concrete effort estimates can be discussed in a scoping call.