NonTocareLeTete wrote: Don't have much for you in terms of canned material, but could you let your students know that:
1.) Correlation does not equal causation. (How just about every writer of every paper I edit missed the first day of research methods is a mystery to me.)
2.) EVERY assertion you make in your research needs to be backed up by data. Personal opinions should be minimal and should be stated as such, preferably in the discussion section.
3.) Before you use a term in the paper, throw it into a Google Scholar search, with quotes around it. If it doesn't come up, it's not a term. Don't use it, and don't make terms up, even if they sound really smart, unless you can define them.
4.) Most experimental designs need a control group. For example, if you are trying to publish research about your kick-ass method for teaching English to university students, a statement like "The teaching method is successful because 93% of the students stated that they plan to continue studying English." is worthless unless you have a benchmark to compare it to (see the sketch below).
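A minimal sketch of why (only the 93% comes from the example above; the control-group numbers are invented, and the test is a standard two-proportion z-test, not anything from any particular paper):

```python
# Hypothetical numbers: 93 of 100 students in the new-method class say
# they plan to continue, vs. 88 of 100 in a conventional (control) class.
# Is 93% actually impressive once there's a benchmark?
from scipy import stats

x1, n1 = 93, 100   # treatment group: plan to continue
x2, n2 = 88, 100   # control group: the missing benchmark

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                    # pooled proportion under H0
se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
z = (p1 - p2) / se
p_value = 2 * stats.norm.sf(abs(z))               # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")          # z = 1.21, p = 0.228
```

Even a five-point gap over a same-size control class wouldn't reach significance here, which is exactly the kind of thing you can't know without the benchmark.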
Back I go, like a good little girl, to editing this 'research' paper on semantic memory. I'm sure I'll have plenty more suggestions for you before the night is through...
You are attacking the very basis of a Taiwanese academic career.
That's a GOOD THING, of course, but it's more fun where there actually is one, at least until you get tired of banging your head against the wall of indifference.
My impression is that research students just want to know which buttons to press in SPSS. Established Taiwanese academics I've encountered mostly have not the slightest interest in good science either. They just want to get published, and, sadly, your basic principles don't seem to be very consistently enforced by journal editors either.
I spent A LOT of time telling people that "correlation NOT=causation" when I was a research assistant/writer at NCKU, to zero effect, but that was in Management (i.e. Social) Science, which makes a lot of use of Structural Equation Modelling, a method which (sometimes, at least) seems to be "respectably" represented as "proving" causation. I never believed that (I knew more statistics than the people conducting the research, but that doesn't say much) or really understood it, and I don't remember much about it now.
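If anyone needs a demonstration for the students, here's a toy simulation (all numbers invented) of the classic reason correlation isn't causation: a hidden common cause produces a strong correlation between two variables that have no effect on each other at all.

```python
# Toy simulation: a hidden common cause Z drives both X and Y.
# X and Y end up strongly correlated although neither causes the other,
# and no amount of fitting X -> Y will reveal that.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                 # unobserved confounder
x = 2.0 * z + rng.normal(size=n)       # X depends only on Z
y = -1.5 * z + rng.normal(size=n)      # Y depends only on Z

r = np.corrcoef(x, y)[0, 1]
print(f"corr(X, Y) = {r:.2f}")         # about -0.74, with zero causation
```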
Here's a "Recipe for Publication Success in Management Research"
rant that I wrote a long time ago. It goes on a bit, but believe me, the provocation was severe. The paper I'm being nasty about was written by a student that got me a fairly expensive watch for Xmas, so yes, I am a bastard.EDIT: I believe she went on to a post-doc research fellowship, though, so I didn't do any lasting damage to her career. ENDEDIT
I wonder if it could form the basis for a "practical" (i.e. cynical) course.
Scan for gist, or pass on. Don't, whatever you do, read the whole thing.
"This is an excellent paper of its type. Unfortunately its a fairly horrible type. There is a great deal of work in it, diligently carried out, with a lot of complex and obscure statistics.
It is a good example of the application of my "Recipe for research publication success in management studies" which follows:
Start with two or more concepts that are not distinct, and ideally are two different descriptions of the same thing, for example: "Marketing Channel Form" and "network structure relationship" in this paper.
Define these in such a way as to obscure the fact that they are the same thing (or ideally, don't define them at all).
You then hypothesise a relationship between them. Since they are the same thing, there is an absolute certainty that this hypothesis is correct.
At some point (if not up front) you make the explicit or tacit assumption that the relationship is causal, i.e. one of them is a dependent variable, and that you can prove this statistically.
Since they are the same thing, there is an absolute certainty that this is NOT correct, and (almost) everyone knows it can't in any case be proved statistically, but this doesn't seem to matter.
You now construct a model to relate these two things, and introduce variables to measure them as constructs. These variables should (of course) NOT be defined, and it helps if they make little sense, overlap with each other, and/or contain two or more unrelated concepts, e.g. "Level of coordination and cooperation and ability of holding market" (yes, that's a SINGLE variable).
Note that the model should be as complex as possible, preferably incomprehensible. Occam's Razor has no place here.
A complex model has two benefits:
(a) Some people may think it's clever.
(b) It'll be a lot of work for the people who think it's rubbish to prove it, so they probably won't bother.
It helps if you introduce additional elements into the model (preferably trendy ones, such as e-commerce) and relate them by obscure statistical techniques measuring interference effects.
You now measure perception of these undefined and possibly meaningless variables in a questionnaire. (Ignore the obvious but inconvenient distinction between perception and reality.)
DO NOT, under any circumstances, use REAL DATA (such as sales figures or market share), even when these are (rarely) available.
DO NOT, under any circumstances, give your respondents any clue as to what the questions you are asking them mean (this is likely to be impossible in any case).
DO NOT give out any information on your questionnaire which would allow its validity to be checked.
You should avoid consistent terminology, preferably never using the same term for the same thing twice (it's OK to use it for something different), since this prevents a reader from relating your conclusions to your results.
When it comes to conclusions, ignore your results and lift your conclusions from the literature. No one will notice, or care. They are tired and bored and just want you to go away.
This student has learned these lessons well (except maybe the last one), but may have overdone it a bit. I had a lot of difficulty relating the results reported in the "Conclusions" section to the original results, and when I did, they didn't seem to correspond exactly (see attached document tracing the interference effect results, which gives some idea how difficult this is to do). The inconsistent terminology is chronic (see, for example, the comments on the abstract).
I've done little more than a basic English correction to this document. I think it may need a more extensive re-write, and this would probably require some of the attached questions to be answered."
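To put a number on step one of the recipe: if your two "constructs" are really just noisy measurements of the same thing, the hypothesised relationship confirms itself. A toy simulation (entirely invented data, not from the paper above):

```python
# Two questionnaire "variables" that are really noisy measurements of
# ONE underlying construct. The hypothesis that they are related
# cannot fail, because they are the same thing.
import numpy as np

rng = np.random.default_rng(1)
construct = rng.normal(size=5_000)                  # the one real thing
noise = 0.3
measure_a = construct + noise * rng.normal(size=5_000)
measure_b = construct + noise * rng.normal(size=5_000)

r = np.corrcoef(measure_a, measure_b)[0, 1]
print(f"r = {r:.2f}")                               # about 0.92, every time
```

With modest measurement noise you get r around 0.9 on every run, which looks like overwhelming support for a hypothesis that was never capable of being false.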