Thursday, September 09, 2004

Checkbox approach to CG doesn't work

Wharton accounting professors David Larcker, Irem Tuna and Scott Richardson say the check-box approach to corporate governance (CG) doesn't work, because companies and their situations are too diverse. "The recipe book is big, and there's a different recipe for each company," Richardson notes. Even worse, the professors say, are consultants and ratings services that use formulas - which they typically refuse to reveal - to boil a company's CG down to a single number or grade.

Yep.

"Lots of people are coming up with G. scorecards," Larcker explains. "They're coming up with best practices and selling this stuff. As far as we can tell, there's no evidence that those scorecards map into better C. performance or better behavior by managers."

Larcker, Tuna and Richardson tried to create a magic formula of their own. But no matter how they sliced and diced governance data (consisting of more than 30 individual measures) on more than 2,100 public companies, they couldn't find one. The three professors have released their findings in a working paper titled "Does Corporate Governance Really Matter?" The title is intentionally provocative. They do think CG matters, but after puzzling over reams of company numbers, they are not confident that anyone can measure whether one firm's governance is better than another's, at least not by using typical metrics.

As they say in their paper, "Our overall conclusion is that the typical structural indicators of CG used in academic research and institutional rating services have a very limited ability to explain managerial decisions and firm valuation."