Research Methods

The ideas that matter, ignoring the pointless jargon

Michael Wood


Brief notes on research methods


Checklist of traps to avoid in research


Marketing Virtuo: a case study of research in action (link to come)


Philosophical issues worth pondering (link to come)


Statistics: an important tool for research




I used to teach students on Masters courses (including an MBA) about research methods. The idea was to teach them how research should be done, so that they were in a better position to do a small research project of their own, and to appreciate the problems with research, so that they would not be taken in by misleading or incorrect conclusions in published research. I was teaching students on business courses, but very similar issues apply across a broad range of social and natural sciences (including medicine, education, genetics, etc.).

I don't think I ever succeeded in achieving either of these objectives. A colleague once said that he thought that students who had not studied research methods did better research projects than those who had, and I think I agree with him. This is no doubt partly due to my limited teaching skills, but I don't think that is the whole story. The way the subject is conventionally presented means that it tends to be ineffective, and may even be counter-productive.

Why should this be? I think the main reason is that a lot of it is common sense, and treating it as a technical subject which needs detailed study wastes a lot of time, and means that the common sense perspective tends to be ignored because the technical jargon makes it seem irrelevant. Furthermore, much of the jargon is so impenetrable that it's largely useless and only serves to confuse, and some aspects of the standard research methods menu are actually worse than useless: they are counter-productive.

For example, suppose we want to look at the relationship between alcohol intake and intelligence: does drink make people less intelligent, or does it enhance their intelligence? The obvious way to research this is to take a sample of people who drink and compare their intelligence with a sample of people who don't drink. There are lots of obvious problems here, but common sense is an excellent guide to the problems and how to solve them. There are some issues (like how big a sample you need to get reliable results) where some technical expertise can help, but in general terms, common sense is an excellent starting point.
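The sample size question is one place where a little technical machinery genuinely helps. A minimal sketch, using the standard formula for estimating a mean to within a chosen margin of error (the numbers below are hypothetical, and the formula assumes we can guess the standard deviation in advance):

```python
import math

def sample_size_for_mean(sigma, margin, z=1.96):
    """Smallest sample size so that a 95% confidence interval for a mean
    (z = 1.96 for 95%) has half-width no larger than `margin`, given an
    assumed standard deviation `sigma`.

    Standard formula: n = (z * sigma / margin) ** 2, rounded up.
    """
    return math.ceil((z * sigma / margin) ** 2)

# Hypothetical example: intelligence scores with a standard deviation of
# about 15, and we want the estimated mean accurate to within 3 points.
n = sample_size_for_mean(sigma=15, margin=3)
print(n)  # about 97 people per group
```

The point is not the formula itself, but that it answers one narrow, peripheral question; whether the two groups are comparable in the first place remains a matter of common sense.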

One Japanese study found that drinkers tended to be more intelligent than teetotallers. A critical evaluation of this study might focus on two questions: can we be sure that it's the drinking that causes extra intelligence (rather than, for example, intelligence causing a drinking habit because intelligent people realise that life is hard and distractions are necessary), and can we be sure the results from the sample studied can be generalised to a wider group (is the sample large enough and reasonably representative of different types of people)?

In the jargon of research methods texts, the first question is described as "internal validity", and the second as "external validity". When I googled for a definition of internal validity, it came up with "Internal validity refers to how well an experiment is done, especially whether it avoids confounding (more than one possible independent variable [cause] acting at the same time). The less chance for confounding in a study, the higher its internal validity is" ... which is unlikely to be very helpful! An alternative explanation would be that it's about the validity of inferences about what's happening within - internal to - the sample: about whether drinking causes intelligence or vice versa. External validity is a bit more obvious: this is about whether the results can be generalised to people external to the sample.

In my view these terms add nothing at all to an understanding of the problems and their resolution: instead they are likely to distract and confuse, and the use of the jargon may even give an illusion of having solved the problem. Such jargon is worse than useless; it should be ignored.

In practice the situation is even worse because some nuggets of supposed wisdom from the manual may be silly or irrelevant. The notions of positivism, social constructivism and phenomenology are confused and, at best, irrelevant to understanding how to do good research (Are 'Qualitative' and 'Quantitative' useful terms for describing research?). Many statistical ideas - like reliability coefficients and significance levels - just answer peripheral questions, but tend to be treated as the main result. Historically these ideas were invented by specialists in philosophy or statistics or whatever, but they tend to be adopted uncritically by proponents of research methods scrabbling round for established wisdom to give academic respectability to their efforts.

The starting point should always be common sense. Only then should you dip into the technical manuals, and only for the bits which seem relevant. Brief notes on research methods is intended as such a brief, common sense guide to research methods. I've tried to avoid all concepts and jargon which are not helpful, and simplify everything as much as possible.

On the other hand, reminders about what to check for, and some suggestions about approaches to specific problems, are in order. Although the question of whether alcohol causes intelligence, or intelligence causes a drinking habit, is certainly one that can be posed and analysed within a common sense framework, it may not occur to some people to question which factor is the cause and which is the effect. It is easy to jump to the wrong conclusions. For this reason I have produced a checklist of traps to avoid in research.