The Health Enhancement Research Organization and its most recent outcomes measurement report get something far short of a ticker tape parade in Lewis's critique, which takes the report's methodologies as its focus.
Nevertheless, Lewis’s point is well taken: the multiple methodologies invite “shopping” for the best results. One by one, Lewis examines the methods and their flaws. Comparing cost trends to industry peers is problematic, for example, since one cannot know exactly what one’s peers are doing behind corporate doors. And peers may be hiring the same vendors and using the same tactics.
Rather than lampooning the mistakes of wellness vendors and consultants, Lewis could do a greater community service by focusing on the flaws of the measures themselves. Granted, some of the flaws stem from the vendors’ self-interest, but many have nothing to do with who is punching the calculator. For example, formulating an expected cost trend is fraught with peril, since “expected” can be defined in any number of ways, a dart board among them. This is more useful to know than StayWell’s or Mercer’s antics.
Comparing wellness program participants to nonparticipants is a widely repeated mistake. It continues because the audience (employers, HR staff, insurance brokers) is not schooled in research study design. The vendors may or may not know better, but as the saying goes, if something isn’t broke … I applaud Lewis and others who point out that it is indeed broken and needs fixing.
Kudos to Lewis for continuing to bang the drum for valid measures. Let’s get past naming names and focus on how we can reasonably and accurately measure results.
Linda K. Riddell is a principal at Health Economy, LLC. She can be contacted at LRiddell@HealthEconomy.net.