There was one presentation made at the most recent Aftermarket eForum that may have slipped by many of those in attendance. Its lack of notoriety might have been attributed to the short duration of the presentation, which was less than 15 minutes. It may have been because it was made during the luncheon on the first day of the meeting. (We all know that it’s never easy for a speaker to compete with cheesecake.) Or, it might have even been because it was about a subject most people love to ignore: the subject of data. Nevertheless, some, myself included, might argue that it was the most significant presentation of the eForum.
The presentation I am referring to was made by Scott Luckett, the Automotive Aftermarket Industry Association (AAIA) vice president of information technology, who shared the results of a study conducted by his association on the general state of data synchronization between trading partners in the aftermarket. The study involved six pairs of trading partners and the product lines that each pair had in common. The trading partners’ data was matched on a relatively simple set of data fields that would be essential for electronic trading, including part number, U.P.C. code, reseller pricing, product description, unit of measure and minimum order quantity. Those who participated in the study included leading manufacturers Dana Brake Parts, Clevite Engine Parts, Delphi, Federal-Mogul, Gates Corporation, Global Accessories, and their reseller trading partners, Advance Auto Parts, CARQUEST, CSK Auto, NAPA, O’Reilly and Reliable.
Research results suggest that trading partners in our industry are not in sync on a substantial portion of fundamental transactional data. In fact, an astonishing 51 percent of the items in the manufacturers’ files had no recognizable equivalent in the files their trading partners submitted. This wide degree of variance in item data between buyers and sellers is causing significant financial and supply chain problems in the industry.
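To make the mechanics concrete, here is a minimal sketch (in Python) of the kind of item-file match the study describes: load the manufacturer’s and reseller’s files, key them on part number, then report both the items with no counterpart and the field-level discrepancies on the items that do match. The file layouts and field names are assumptions for illustration only, not the study’s actual formats or methodology.

```python
# Illustrative sketch only: the CSV layouts and field names below are
# assumptions for this example, not the study's actual formats or methodology.
import csv

KEY = "part_number"
COMPARE_FIELDS = ["upc", "reseller_price", "description",
                  "unit_of_measure", "min_order_qty"]

def load_items(path):
    """Load an item file into a dict keyed by normalized part number."""
    with open(path, newline="") as f:
        return {row[KEY].strip().upper(): row for row in csv.DictReader(f)}

def match_files(mfr_path, reseller_path):
    mfr = load_items(mfr_path)
    rsl = load_items(reseller_path)

    # Items in the manufacturer's file with no counterpart in the reseller's file
    unmatched = [pn for pn in mfr if pn not in rsl]

    # Field-level discrepancies on the items that do match
    mismatches = []
    for pn, m_row in mfr.items():
        r_row = rsl.get(pn)
        if r_row is None:
            continue
        for field in COMPARE_FIELDS:
            if (m_row.get(field) or "").strip() != (r_row.get(field) or "").strip():
                mismatches.append((pn, field, m_row.get(field), r_row.get(field)))

    print(f"{len(unmatched)} of {len(mfr)} manufacturer items "
          f"({len(unmatched) / max(len(mfr), 1):.0%}) have no match in the reseller file")
    print(f"{len(mismatches)} field-level discrepancies on matched items")
    return unmatched, mismatches

if __name__ == "__main__":
    match_files("manufacturer_items.csv", "reseller_items.csv")
```

The point of the sketch is how little machinery is involved: the hard part is not comparing the files, it is getting trading partners to share and maintain the data in the first place.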
Based on the findings, the study concluded that product and pricing inaccuracies are costing manufacturers and distributors 1 percent and 0.75 percent of sales, respectively. Similar studies in other industries support these findings. For example, an A.T. Kearney study for the grocery industry last year concluded that 30 percent of items in retailer item files were in error, and that the consumer packaged goods industry alone is losing $40 billion in sales every year due to poor product data. AAIA estimates that the automotive aftermarket’s data inaccuracies annually cost us in the neighborhood of $1.7 billion.
Cause & effect
We in the aftermarket tend to equate any discussion about data with electronic catalog application data (part numbers as applied to a year, make and model) and not with product attribute data like units of measure, minimum order quantity or U.P.C. codes. We are so obsessed and focused on application data that we overlook product attribute data, and it is costing us in untold ways.
How can something as seemingly simple and innocuous as data inaccuracies cost that kind of money? It happens in myriad ways, and I want to share a few concrete examples.
- Mismatched minimum order quantities. A recent data synchronization exercise between a major filter supplier and a very system-savvy distributor was revealing. About 30 percent of the 3,500 SKUs stocked by the distributor had a mismatch between the buyer and seller in the minimum order quantity (MOQ) field. That means on nearly one-third of the lines, the buyer may be receiving more than he ordered, or he may be ordering more than he needs (a simple field-by-field comparison, like the sketch following this list, is all it takes to surface these mismatches). The buyer made the amusing observation that when this discrepancy was uncovered, he spent about half an hour trying to decide which was worse: buying more than was required or being shipped more than was ordered.
- Old data lingering on legacy systems. After another data matching exercise between trading partners, a distributor who had bragged extensively about how good its data was discovered significant duplication of part numbers in its system. It seems the system structure allowed new part numbers to be entered at each of several branch locations, and that had resulted in scores of part numbers being duplicated one or more times throughout the system. Those duplications had not been caught in the distributor’s own data scrubbing exercises because each entry carried a technically unique part number; some differed only by an alpha prefix or suffix used to identify the vendor, and in other cases superseded numbers had not been purged (the sketch following this list shows one simple way to flag these). One part had actually been duplicated six times, twice as an A-class number! That literally meant the distributor was maintaining stock and safety stock on the same part six times over!
- Invoice reconciliation and associated deductions. One of the great time bandits for trading partners is reconciling invoices and dealing with the deductions and adjustments resulting from that process. Talk to any field salesperson and they will tell you that between 15 percent and 20 percent of their time is spent on the unproductive activity of straightening out these administrative nightmares. When data is matched between trading partners, many if not most of these problems are eliminated.
- Outdated popularity codes. An area that usually turns up the greatest number of data mismatches is popularity codes. Perhaps trading partners are so focused on data fields deemed to be “more important,” like price and application, that pop codes get the short end of the stick. Clearly, this data can help a reseller understand anticipated volume based on the supplier’s historical perspective. It also can help him anticipate changes in demand as specific part numbers mature. Yet, consistently this is some of the worst data in terms of buyer/seller match. And we wonder why we always seem to have too much of the wrong stuff in our inventories!
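For the first two examples above, the checks themselves are trivial once the trading partners’ files sit side by side. The sketch below is purely illustrative (the field names and the vendor prefix/suffix convention are assumptions, not any particular distributor’s system): one function flags MOQ mismatches between the buyer’s and seller’s files, the other groups part numbers that collapse to the same core number once leading or trailing alpha characters are stripped.

```python
# Hypothetical checks for the first two war stories above. Field names and
# the vendor prefix/suffix convention are assumptions, not any particular
# distributor's system.
import re
from collections import defaultdict

def moq_mismatches(mfr_items, dist_items):
    """Return part numbers whose minimum order quantity differs between the
    seller's file (mfr_items) and the buyer's file (dist_items); both are
    dicts keyed by part number."""
    return [
        pn for pn, item in dist_items.items()
        if pn in mfr_items
        and item["min_order_qty"] != mfr_items[pn]["min_order_qty"]
    ]

def likely_duplicates(part_numbers):
    """Group part numbers that collapse to the same core number once leading
    or trailing alpha characters (e.g. a vendor prefix or suffix) are stripped."""
    groups = defaultdict(list)
    for pn in part_numbers:
        core = re.sub(r"^[A-Z]+|[A-Z]+$", "", pn.upper())
        groups[core].append(pn)
    return {core: pns for core, pns in groups.items() if len(pns) > 1}
```

Neither function would catch superseded numbers, which still require a supersession cross-reference from the supplier; the point is simply that once the data is shared and synchronized, the checking is not the hard part.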
Correcting bad data
Perhaps even more disappointing than those four examples is the amount of time that many manufacturers spend manually intervening in their electronic order and invoicing systems to correct bad data that has been transmitted to them with great efficiency, usually by their more progressive customers.
I recently asked the general manager of a prominent aftermarket company about data compatibility issues with his customers, specifically as it related to EDI orders.
“We don’t have any,” he replied bluntly.
I was impressed, but incredulous…so I pressed him, “Do you mean to say that the EDI orders from customers come in and drop directly on your order entry system, and you never have orders kicked out because your system can’t recognize something in the data stream?”
“Oh, no,” he replied, “we could never have EDI orders drop directly onto the system. We have somebody open them and check them first.” I guess that’s why he never has a data compatibility problem.
Such expensive human intervention to fix data problems is even more obvious in the second war story. A senior manager of another major automotive parts supplier told me how shocked and outraged he was when he discovered how much time (and cost) his information technology (IT) department was sinking into repairing data errors. He discovered the problem when he spent some time in the IT area to, as he put it, better understand what “those people do down there.” What he found was a remarkable number of IT people literally “mapping and translating” to adjust for and correct incoming data errors. He went on to express his frustration that they hadn’t gotten some less expensive administrative help to make the fixes! I couldn’t help believing that the relatively painless, upstream alternative of data synchronization would have eliminated the problem altogether.
All this reminded me of a story I heard 20 years ago. A friend was complaining about what he felt was an absurd piece of pending legislation that would require furniture manufacturers to apply an expensive fire retardant chemical to prevent careless cigarette smokers from setting their sofas on fire. When I asked him why that was so absurd, he told me tobacco companies applied a chemical to tobacco that promoted continuous burning so cigarettes wouldn’t go out sitting in the ashtray. “Wouldn’t it make more sense to not treat the cigarettes?” he asked. Thinking about the time and money these two companies spend adapting to bad data, I thought of my friend’s story and asked myself: doesn’t it make more sense to fix the data?
Support vs. compliance
The solution starts with more manufacturers getting their data compliant with the industry’s Product Information Exchange Standards (PIES). It continues when they develop a data matching process that facilitates initial and ongoing data synchronization with their trading partners. Finally, the circuit is closed when the resellers make the necessary changes to bring their data into compliance and synchronization with their suppliers. Although this is a relatively simple undertaking, it is labor intensive and the practice has not become widespread as quickly as some had hoped.
What we have seen is many manufacturers, program groups and system providers voicing their support for the standards. What we have not seen is widespread compliance. For standards to work, there must be support and compliance. There is a big difference between conceptually agreeing and investing the time, money and energy to comply. We need to find a way to get more people actively committed to the data standards in the aftermarket. We must create peer pressure on those who voice support but are not committed to putting standards into practice.
I truly believe a day is coming when buyers will make line decisions based not exclusively on product performance characteristics, but on data management capabilities as well. If you are familiar with W.W. Grainger’s impressive network of e-enabled warehouses and Web-based commerce activities, you’re aware of that company’s commitment to data and e-commerce. I heard a buyer from Grainger speak a couple years ago at an industrial e-forum, and he made the statement, “I am no longer just buying products, I am buying product information. If vendors have bad data, they run the risk of being replaced by those who can provide good data.”
Converting support to commitment, compliance
There is an idea gaining favor in certain aftermarket circles that could help encourage more compliance with industry data standards. The concept is to have AAIA begin a data certification program that independently reviews data quality and compliance with industry standards, much like complying with ISO process standards for quality. Proponents say it would work like this. First, the industry trade associations, working with the Aftermarket Council on Electronic Commerce (ACEC), would agree on a certification system for data compliance. For instance, the criteria might be as simple as beginner, intermediate and advanced levels. That is, the ACEC might decide the first level of compliance for a manufacturer is to have its basic transactional data (like part numbers, price, U.P.C. codes, units of measure, etc.) fully populated in PIES format for every SKU in its line. Then similar criteria would be established for the intermediate and advanced levels of compliance. Those would be published and distributed for the review and participation of all.
At the appropriate time, manufacturers would be invited to submit their data for review. Most likely a third party contractor under the auspices of AAIA would provide that function. The data would be evaluated and a report card would be issued only to the party who requested the review. Companies that pass would be allowed to claim “official compliance with AAIA standards,” based on their level of certification. Companies that fail would be provided a report detailing exactly what is required to bring their data into compliance. In the same way that prospective college students can take and retake their entrance exams until they achieve the score they desire, so too could manufacturers resubmit data until they become certified. The concept would not be to punish anyone, but to officially recognize and reward those who get their data act together.
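As a thought experiment, a first-level check could be as mechanical as the sketch below: confirm that every SKU in a submitted file has its basic transactional fields populated. The required-field list and pass/fail logic here are my own illustration; the real criteria and PIES segments would be whatever the ACEC and AAIA ultimately publish.

```python
# Illustrative first-level certification check. The required-field list is
# an assumption for the example, not the actual ACEC/AAIA criteria or the
# real PIES segment definitions.
REQUIRED_FIELDS = ["part_number", "upc", "price", "unit_of_measure", "min_order_qty"]

def certification_gaps(items):
    """Return a report of SKUs missing any required field.
    `items` is a list of dicts, one per SKU."""
    gaps = {}
    for item in items:
        missing = [f for f in REQUIRED_FIELDS if not str(item.get(f, "")).strip()]
        if missing:
            gaps[item.get("part_number", "<no part number>")] = missing
    return gaps

def passes_first_level(items):
    """A file passes only when every SKU is fully populated."""
    return len(certification_gaps(items)) == 0
```

Resubmitting after a failed review, as described above, would amount to filling the reported gaps and running the same check again.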
The idea is sound and I hope the industry moves with haste to refine and implement it. As I have railed repeatedly, the implementation of supply chain automation is the aftermarket’s last, best chance to bring some semblance of sanity and profitability back to our business. And the existence of robust, standardized and synchronized data is the ante for that poker game.
Perhaps more importantly, the implementation of data certification would help address a rather insidious phenomenon that is negatively impacting the progress of supply chain technology in the aftermarket. I’m referring to the practice of aftermarket companies (be they manufacturers, program groups or system providers) saying publicly that they support industry efforts for standards, but not taking any action to adopt them. Having a graphic bug that compliant companies could display, in a fashion similar to “Be Car Care Aware,” would be a way to distinguish compliers from mere supporters. And that is the sort of peer pressure that will get all of us down the data highway a lot faster.