When business researchers analyze data, they often rely on assumptions to make sense of their findings. But, like anyone else, they can run into serious problems if those assumptions turn out to be wrong, and in finance that happens more often than they might think.
That is what we found in a recent study that analyzed financial data from roughly one thousand major US companies.
One of the most common assumptions in data analysis is that the figures follow a normal distribution, a central statistical concept better known as the bell curve. If you have ever seen a chart of people’s heights, you have seen this curve: most people cluster near the middle, with fewer at either end. It is symmetrical and predictable, and it is often taken for granted in research.
But what happens when real-world data does not follow that neat curve?
We are professors who study business, and in our new study we analyzed financial data from publicly traded US companies: measures such as market value, market share, total assets and other similar financial metrics and ratios. Researchers routinely analyze this type of data to understand how companies operate and make decisions.
We found that these figures often do not follow the bell curve. In some cases, there are extreme outliers, such as a few companies thousands of times larger than the smaller ones. We also observed right-skewed distributions, meaning the data clusters on the left side of the chart.
In other words, most values sit at the lower end, but a few very large figures pull the average upward. This makes sense, since many financial metrics can only be positive; for example, you will never find a company with a negative number of employees.
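To see what that looks like in practice, here is a minimal sketch using simulated data rather than any real company figures; the numbers and the "market cap" label are purely illustrative and are not drawn from our study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical "market cap"-like values: log-normal data is strictly positive
# and right-skewed, so a few giant firms pull the mean well above the median.
market_cap = rng.lognormal(mean=7.0, sigma=1.5, size=1000)

print(f"median:   {np.median(market_cap):,.0f}")
print(f"mean:     {np.mean(market_cap):,.0f}")    # much larger than the median
print(f"skewness: {stats.skew(market_cap):.2f}")  # clearly positive (right-skewed)
```

In data like this, the mean lands well above the median, which is exactly the kind of gap a bell-curve assumption would miss.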
Why is it important?
If business researchers rely on faulty assumptions, their conclusions (about what drives a company’s value, for example) could be wrong. These errors can ripple outward and influence business decisions, investor strategies or even public policy.
Take stock returns as an example. If a study assumes those returns are normally distributed, but in reality they are skewed or full of outliers, the results could be distorted. Investors who rely on that research could be misled.
Researchers know their work has real consequences, so they often spend years refining a study, gathering feedback and revising the article before it is peer-reviewed and prepared for publication. But if they do not check whether their financial data is normally distributed, they may overlook a serious flaw. This can undermine even well-designed studies.
With this in mind, we encourage researchers to ask themselves: Do I understand the statistical methods I am using? Am I checking my assumptions or simply assuming they are correct?
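For readers who work with data themselves, here is one simple, illustrative way to check that assumption, again using simulated data rather than any real dataset; the Shapiro-Wilk test shown here is just one of several common normality checks, and the log transform is one common remedy, not a universal fix.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical skewed, positive-only financial metric (simulated, for illustration only)
metric = rng.lognormal(mean=0.0, sigma=1.0, size=500)

# Shapiro-Wilk test: a small p-value suggests the data are not normally distributed.
stat_raw, p_raw = stats.shapiro(metric)
print(f"raw data:        W={stat_raw:.3f}, p={p_raw:.2e}")

# For right-skewed, positive-only data, a log transform often brings the
# distribution much closer to normal.
stat_log, p_log = stats.shapiro(np.log(metric))
print(f"log-transformed: W={stat_log:.3f}, p={p_log:.2e}")
```

A check like this takes a few lines, yet it can reveal whether the method a study relies on is appropriate for the data at hand.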
What still isn’t known
Despite the importance of assumptions about data, many studies do not report evidence of normality. As a result, it is unclear how many findings in finance and accounting research rest on shaky statistical ground. More work is needed to understand how common these problems are and to encourage best practices for testing and correcting them.
While not every researcher needs to be a statistician, it would be prudent for anyone who works with data to ask: just how normal is it?
*Brian Blank is an associate professor of finance at Mississippi State University; Gary F. Templeton is a professor of management information systems at West Virginia University.
This text was originally published in The Conversation