Plekhanov Russian University of Economics (РЭУ им. Г.В. Плеханова)
Department of Entrepreneurship and Logistics
Lecture 3: Documenting research results
Dmitry V. Zavyalov, Cand. Sc. (Economics)
Zavyalov_d@inbox.ru
Zavyalov.DV@rea.ru
Documenting research results (Thesis)
• Thesis composition
• Subject headings and outline
• Language and style
• Document editing
Language, style, format
• Personal vs. impersonal
  “I measured the frequency at regular intervals.”
  “The frequency was measured at regular intervals.”
• Formal vs. informal
  Use neutral words
  Avoid controversial labels and descriptions
  Avoid slang and contractions (don’t, couldn’t, won’t)
  Be careful when translating from your native language
• Political correctness
• References
How plagiarism is detected
• The student’s paper sounds too professional or journalistic, exceeding his/her research writing capabilities
• The paper contains complex or specialized vocabulary, technical terms or other words beyond the student’s expected writing level
• Inconsistent writing quality
• Plagiarism detection software (Copyscape, Antiplagiat, Grammarly)
• Online search engines
General remarks
• Do your best with what you have
• Assumptions and limitations are your conclusions’ “firewall”
• Every statement has its sources: either prior research or your findings
• Thus, everything you say can be used against you
• Graphs are there to support the author’s point, not to pose additional questions
• It must be easy to switch between the data, the graphs and the text
Logical order
Theory, literature review → Define problem → Hypothesis → Test → Interpretation → Formulate answers → Applicability, world impact
Sample outline
1. Introduction
   • Introductory paragraphs, statement of the problem
   • Theoretical framework
   • Hypothesis, significance of the study
   • Scope and limitations, assumptions
2. Chapter 1 (Background and methodology)
   1. Definition of terms
   2. Theoretical background
   3. Background info (e.g. company, industry info)
3. Chapter 2 (Research)
   1. Type of research
   2. Sampling method, respondents, questionnaire
   3. Procedure and timeframe
   4. Analysis plan, validity and reliability, assumptions
4. Chapter 3 (Results and conclusion)
   1. Presentation and analysis of data
   2. Discussion of findings, explaining why such results were obtained
   3. Overall conclusion and recommendations
Bibliography
Appendix
Introductory part
• General presentation of the research problem
• Convince the reader that you have identified a research problem worth investigating
• Purpose and exact direction of the paper
• Establish a rationale: the research is necessary because there are questions with no answer yet
• Author’s intent
Keep it short
Define the problem
Keep it organized
Objectives
• Provide a clear statement of the overall question – the general problem
• Follow it with an action-oriented task – the specific objective
• If there is more than one specific objective, state them sequentially
Introduction
• Statement of the problem
  Thesis statement or hypothesis statement
• Significance of the study
  Relevance of the study
  Relation to larger issues in our society
  Justify the need to conduct your research
• Scope and limitations
  Weaknesses in your experiment
• Assumptions
Background
• Theory and literature review
  Keep it short
• Definition of terms
  Explain complex or technical terms
  Operational definitions
  List of abbreviations
• Country, industry, company etc.
  Only relevant data
Methodology
• Explain the choice of methods
• Description of the materials and equipment used in the research
• Explanation of how the samples were gathered and prepared, details of techniques used
  Extra data goes in the appendix
• Explanation of how measurements were made and what calculations were performed upon the raw data
Methodology and research
• State the choice of methods explicitly
• Give enough detail for the reader to follow
• First give an overall summary of your study design and methodological approach
• Then provide the methodology for each specific objective
• Describe
  • the specific design (what you will do and how, number of replicates, etc.)
  • the materials and techniques that are used
  • the feasibility of these techniques
  • use literature to support the design, materials & techniques
  • no need to explain standard procedures – but give a reference
Results and conclusion
• Do not include too much info – only the relevant things
• Describe the course of the experiment and what you found
• Do not use vague explanations – use facts and figures
• For quantitative research use numerical data; for qualitative research include observations
• Include negative results
• Use tables, pictures, graphs
  Easy to read and comprehend
  Use only if relevant
  If small, put them within the text itself
  Must contain as much as needed, but no more than necessary
Discussion
• Your own interpretation of the work
• Explain links and correlations in your data
• Explain the meaning of the results
• Underline the significance of your study: results always generate something of value
• Be honest and criticize the experiment a bit; suggest modifications and improvements
• Compare the results with previous research
• Explain how the results of your research change the world, but do not be too broad in your generalization – you probably cannot change the world that much
Conclusion
• What has your research shown?
  Sum up the paper
  Brief description of results
• How has it contributed?
  Importance of the study
  Benefits to the readers (industry, company)
• What were the shortcomings?
  Problems with the research methods and their influence on the final result
• Unanswered questions?
  Openness for future research
• Can the results be used in the real world?
Pitfalls
• Grammar and punctuation
• Person and tense
• Waffling
• One-sided argument
• Readability and flow
Proofreader?
• Hire someone competent
• Allow at least 48 hours to pass before proofreading yourself
• Promise to buy a beer or two for a friend if he/she reads through your work
Graphical excellence
The principles of Graphical Excellence (GE) are:
• GE is the well-designed presentation of interesting data – a matter of substance, of statistics, and of design.
• GE consists of complex ideas communicated with clarity, precision, and efficiency.
• GE is that which gives the viewer the greatest number of ideas in the shortest time with the least ink in the smallest space.
Pie Chart
• Good for comparing similar data
• Check to be sure the percentages add up to 100% (see the sketch below)
• Beware of slices of the pie called "Other"
If your goal is to manipulate, mislead, or cheat, use sophisticated graphical displays.
[Figure: a cluttered pie chart with 14 slices (Var1–Var14), each only a few percent – far too many categories to read.]
Avoid three-dimensional pie charts; they don't show the slices in their proper proportions.
[Figure: the same six-slice data (Var1–Var6: 13%, 23%, 16%, 17%, 12%, 19%) shown as a flat pie and as a 3-D pie – the 3-D version visually distorts the proportions.]
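A minimal plotting sketch (Python with matplotlib; the category names and shares are hypothetical, not from the slides) of the two checks above: verify that the slices sum to 100% and draw a flat rather than a 3-D pie.

# Minimal sketch with assumed data: verify slice percentages before plotting a pie chart.
import matplotlib.pyplot as plt

labels = ["Rent", "Salaries", "Marketing", "Other"]   # hypothetical categories
shares = [35.0, 45.0, 12.0, 8.0]                      # hypothetical percentages

total = sum(shares)
assert abs(total - 100.0) < 0.5, f"Slices sum to {total}%, not 100%"

# A flat (2-D) pie keeps the angles proportional to the values.
plt.pie(shares, labels=labels, autopct="%.0f%%", startangle=90)
plt.title("Expenses by category (hypothetical data)")
plt.show()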
Distortion: changing the slice circumference (perimeter)
Distortion: hiding the angles and the straightness of the lines
[Figure: pie charts with slices 1–4 drawn with distorted perimeters and angles.]
[Figure: pie charts of the energy mix in 1973 and 2000, split into Oil, Gas, Coal, Nuclear and Renewable shares.]
Graphs should not be too complex: complexity raises additional questions.
Graphs should not be too simple either.
Polar bear population
• The World Wildlife Fund (WWF) has written on the threats posed to polar bears from global warming
Source: WWF, “Polar Bears at risk”
Polar bear population
• However, also according to them, about 20 distinct polar bear populations exist, accounting for approximately 22,000 polar bears worldwide.
• Only 2 of the groups are decreasing, 10 populations are stable, and 2 populations are increasing.
• The status of the remaining 6 populations is unknown.
• If you only looked at the 2 groups that are decreasing, it would be easy to say that the polar bear population is "decreasing". You need to look at the whole picture to get the whole story.
[Pie chart: share of polar bear populations by status – Stable, Unknown, Decreasing, Increasing.]
Source: WWF, “Polar Bears at risk”
[Bar chart: total number of employees in all respondent companies, 2009–2011, on a 0–6,000 scale.]
This bar chart shows the total number of employees in all respondent companies from 2009 to 2011. The total number of employees grew each year: in 2009 it was 3,787; in 2010 it rose to 4,307; and in 2011 it climbed to 5,274 people. So one could conclude that, since the number of employees has been growing each year, government support helps to create new jobs.
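A short sketch of how the chart above could be drawn honestly (Python with matplotlib, using the employee counts quoted in the text and a zero-based axis so the growth is not exaggerated).

# Sketch of the bar chart described above (values taken from the text).
import matplotlib.pyplot as plt

years = ["2009", "2010", "2011"]
employees = [3787, 4307, 5274]

plt.bar(years, employees)
plt.ylabel("Total number of employees")
plt.title("Total number of employees in respondent companies")
plt.ylim(0, 6000)  # start the axis at zero so the growth is not exaggerated
plt.show()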
Bar Chart: money spent on transportation by people in different household-income groups
How Graphs Can Distort Statistics
Lottery example – how often each number was drawn, out of 4,839 draws in total:

Number drawn    No. of times drawn    Percentage of times drawn (no. of times drawn ÷ 4,839)
1               485                   10.0%
2               468                   9.7%
3               513                   10.6%
4               491                   10.1%
5               484                   10.0%
6               480                   9.9%
7               487                   10.1%
8               482                   10.0%
9               475                   9.8%
…               474                   9.8%

[Two bar charts of the same counts: one with the vertical axis truncated to 440–520, which exaggerates the differences between numbers, and one on a full 0–12% scale, which shows the distribution is nearly uniform.]
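The percentage column can be recomputed directly from the counts; a tiny sketch (plain Python, counts taken from the table above) makes the near-uniformity explicit.

# Recompute each share: count divided by the 4,839 total draws.
counts = [485, 468, 513, 491, 484, 480, 487, 482, 475, 474]
total = sum(counts)            # 4,839
for c in counts:
    print(f"{c} draws -> {c / total:.1%}")
# Every share is close to 10%, so the distribution is nearly uniform;
# only a truncated vertical axis makes the differences look dramatic.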
Time Chart
• Check the scale and start/end points on the vertical axis. Large increments and/or lots of white space make differences look less dramatic; small increments and/or a plot that totally fills the page exaggerate reality (see the sketch below).
• It's misleading to show equally spaced points on the horizontal (time) axis for 1990, 2000, 2005, and 2010.
• Make sure it is appropriate to compare the units on the vertical axis over time.
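A minimal sketch (Python with matplotlib, made-up values) of the scale effect described in the first bullet: the same series plotted on a full scale and on a cropped scale.

# Sketch with hypothetical data: the same series on an honest scale and a cropped scale.
import matplotlib.pyplot as plt

years = list(range(1940, 2020, 10))
values = [5, 6, 6.5, 7, 7.5, 8, 9, 10]   # hypothetical values

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(years, values, marker="o")
ax1.set_ylim(0, 12)                      # full scale with headroom
ax1.set_title("Full scale")
ax2.plot(years, values, marker="o")
ax2.set_ylim(4.8, 10.2)                  # cropped scale exaggerates the slope
ax2.set_title("Cropped scale")
plt.tight_layout()
plt.show()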
Time Chart
[Figure: the same 1940–2010 series plotted twice – once on a wide vertical scale and once on a cropped scale that makes the growth look much steeper.]
Other distortions
[Figures: the same 1980–2015 series plotted with different vertical axes – a linear scale (100–600), a logarithmic scale (1–1,000) and a percentage scale (0–160%) – each giving a different impression of the trend.]
Personnel satisfaction with medical equipment
[Line chart: satisfaction scores (0.1–0.8) for four staff groups – Admin, Docs, Paramed, Support – across categories 1–6.]
Distorting values
Fatality Risk: death rate per mile relative to rail (=1)
[Bar chart: Bus 0.9, Air/Rail 1, Van 3.4, Car 9.1, Bicycle 86, Motorcycle 388.]
Source: The Times
Unit fraud: unclear definitions of units, especially when they contradict everyday usage
[Bar chart: productivity of the US, France and Germany (index, 80–140) measured three ways – output per worker, output per hour worked, and output per person of working age.]
BBC on house prices
Research errors
Questionnaire Studies
• Using a questionnaire to work with problems that require other research techniques
• Not giving enough care to the development of the questionnaire and testing it
• Asking too many questions, thus making unreasonable demands on the respondents’ time
• Overlooking details of format, grammar, printing, and so on that can influence respondents’ first impression
• Not checking a sample of non-responding subjects for possible bias in the questionnaire
Interview Studies
• Not adequately planning the interview or developing the interview guide
• Not conducting sufficient practice interviews to acquire needed skills
• Failing to establish safeguards against interviewer bias
• Not making provisions for calculating the reliability of the interview data
• Using language in the interview that the respondents won’t understand
• Asking for information that the respondents cannot be expected to have
Observational Studies
• Not sufficiently training observers and thus obtaining unreliable data
• Using an observation procedure that demands too much of the observer
• Failing to safeguard against the observer’s disturbing or changing the situation being observed
• Attempting to evaluate behavior that occurs so infrequently that reliable data cannot be obtained through observations
Experimental Studies
• Inadvertently or otherwise treating the experimental and control groups differently, thus leading to biased findings
• Using too few cases, leading to large sampling errors and insignificant results
• Failing to divide the main groups into subgroups in situations where subgroup analysis may produce worthwhile knowledge
• Matching the subjects in the experimental and control groups on criteria that have little to do with the variables being studied
• Attempting to match control and experimental groups on so many criteria that in the process you lose a large number of subjects who cannot be matched
Content Analysis Studies
Content analysis studies look for patterns within some type of material (e.g. texts, transcripts of conversations, videotapes of classroom interactions, etc.)
• Selecting content that is easily available but is not an unbiased sample
• Selecting some content that is not really related to the research objectives
• Failing to determine the reliability of the content-analysis procedures
• Using classification categories that are not sufficiently specific and comprehensive
Relationship (Correlation) Studies
• Assuming that a correlation between pieces of data is proof of a cause-and-effect relationship
• Trying to build a correlation study around conveniently available data instead of collecting the data needed to do a worthwhile study
• Selecting variables for correlation that have been found non-productive in previous studies
• Using a sample in correlation research that differs on so many variables that comparisons of groups are not interpretable
• Failing to use appropriate disciplinary theory in selecting variables to study
• Using simple correlation techniques in studies where partial correlation or multiple correlation is needed to obtain a clear picture of the way the variables are operating (see the sketch below)
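As an illustration of the last point, a small sketch (Python with NumPy, entirely synthetic data; the variables x, y, z are hypothetical) comparing a simple correlation with a partial correlation computed from regression residuals.

# Sketch with synthetic data: a simple correlation driven by a third variable z;
# partialling z out via regression residuals removes most of the association.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=1000)                  # confounding variable
x = z + rng.normal(scale=0.5, size=1000)
y = z + rng.normal(scale=0.5, size=1000)

print("simple r(x, y):", np.corrcoef(x, y)[0, 1])

def residuals(a, b):
    # Residuals of a simple linear regression of a on b.
    slope, intercept = np.polyfit(b, a, 1)
    return a - (slope * b + intercept)

rx, ry = residuals(x, z), residuals(y, z)
print("partial r(x, y | z):", np.corrcoef(rx, ry)[0, 1])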
Four sources of errors
• Administrator (interviewer, programmer, facilitator, etc.)
• Respondent
• Instrument (e.g. the survey questionnaire)
• Mode of data collection
• The 5th source (in business studies): organization
Administrator
• Rewording questions
• Accentuating certain words
• Skipping questions
• Recording wrong answers
• Affecting the respondent's behavior
• Incorrect data entry, coding, or programming
• Sample selection error – unrepresentative sample due to an error in sample design
• Administrator falsifies questionnaires, responses or other data
Respondent error: error resulting from some respondent action or inaction
• Nonresponse error – statistical differences between a survey that includes only those who responded and a perfect survey that would also include those who failed to respond
• Deliberate falsification – respondent may wish to appear “better” than he/she really is
• Acquiescence bias – respondents are agreeable rather than truthful
• Extremity bias – respondents provide extreme responses to all questions
• Carelessness – respondents do not read or complete the survey carefully
• Auspices bias – response bias caused by the respondent being influenced by the sponsor of the study
Respondent error: error resulting from some respondent action or inaction
• “Laziness” error – respondent gives an “average” answer
• Proxy response error – taking answers from someone other than the respondent
• Misunderstanding error – respondent misunderstands the requirements
Non-response errors
Non-response errors are all errors arising from:
• Unit non-response, i.e. failure to obtain information from a pre-chosen sampling unit or population unit
• Item non-response, i.e. failure to get a response to a specific question or item in the data recording form
A small simulation below illustrates how non-response can bias an estimate.
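A small simulation (Python with NumPy; the data and the response mechanism are entirely assumed for illustration) showing how unit non-response can shift an estimate away from the true population value.

# Sketch with synthetic data: respondents differ systematically from non-respondents.
import numpy as np

rng = np.random.default_rng(1)
satisfaction = rng.normal(loc=5.0, scale=2.0, size=10_000)   # true population values

# Assumption for illustration: dissatisfied people are less likely to respond.
p_respond = np.clip(0.2 + 0.08 * satisfaction, 0, 1)
responded = rng.random(10_000) < p_respond

print("true population mean:        ", satisfaction.mean().round(2))
print("mean among respondents only: ", satisfaction[responded].mean().round(2))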
Instrument
• The choice of research instrument is wrong
• The question is unclear, ambiguous or difficult to answer
• The list of possible answers suggested in the recording instrument is incomplete
• Requested information assumes a framework unfamiliar to the respondent
• The definitions used by the survey are different from those used by the respondent (e.g. how many part-time employees do you have?)
Data collection
• Errors in transmission of data from the field to the office
• Errors in preparing the data in a suitable format for computerisation, e.g.
during coding of qualitative answers
• Errors during data analysis, e.g. imputation and weighting
Organization
• Inaccurate, outdated, incomplete data
• Data is difficult to access
• Data is unavailable for the unit of observation
• Falsified data
Errors in gathering research data
• Not paying enough attention to establishing and maintaining rapport with the subjects. This often leads to refusals to cooperate or to a negative attitude that can reduce the validity of tests and other measures
• Weakening the research design by making changes merely for convenience
• Failing to evaluate available measures completely before selecting those to use in the research. This often leads to the use of invalid or inappropriate measures
• Selecting measures to use in the research that have such low reliability that true differences are hidden
• Failing to explain the purposes of measures used in the research to those who will be administering the measure. If a research assistant thinks a test or measure is silly or worthless, subjects may easily sense his/her attitude, leading to poor cooperation
• Selecting measures to use in the research that the researcher is not qualified to administer and score
Errors in processing data
• Failing to set up a systematic routine for scoring and recording data
• Not recording details and variations in scoring procedures when scoring
data and then being unable to remember what was done when called upon
to describe the procedure in the report
• Not checking the scoring for errors
• Changing the scoring procedure during the process of scoring the research
data
Errors when using Standard Measuring Instruments
• Failing to check the content validity of achievement measures in the situation in which the research is to be carried out. That is, an achievement measure may be valid in one situation but not in another
• Failing to standardize or control the role of the person administering the measure in the data collection situation. That failure introduces variations in the amount and kind of assistance given the subjects during the test
• Checking the overall validity and reliability of measures selected but failing to check the validity and reliability of the data
• Using personality inventories and other self-reporting devices in situations in which the subject might be expected to fudge answers to create a better impression (e.g. self-assessment tests during job interviews)
Errors when using Standard Measuring Instruments
• Assuming that standard tests measure what they claim to measure without thoroughly evaluating available validity data
• Attempting to use measures that the researcher is not sufficiently qualified to administer, analyze, or interpret
• Failing to use the testing time well. For example, a researcher might wrongly administer long tests when shorter ones are available that meet the requirements of the research project equally well
• Not carrying out a pretrial of the measuring instruments and procedures, thereby making mistakes when collecting the data and introducing bias
Errors when using statistical tools
• Selecting a statistical tool that is not appropriate or correct for the proposed analysis
• Collecting research data and then trying to find a statistical technique that can be used to analyze them
• Using only one statistical procedure when several can be applied to the data. This often leads to overlooking results that could have made a significant contribution to the research
• Overstating the importance of small but statistically significant differences (see the sketch below)
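To illustrate the last point, a short sketch (Python with NumPy and SciPy, synthetic data): with very large samples even a negligible difference yields a tiny p-value, so an effect size such as Cohen's d should be reported alongside it.

# Sketch with synthetic data: a trivially small difference becomes "significant" at large n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.normal(loc=100.0, scale=15.0, size=50_000)
group_b = rng.normal(loc=100.5, scale=15.0, size=50_000)   # difference of only 0.5 points

t, p = stats.ttest_ind(group_a, group_b)
cohens_d = (group_b.mean() - group_a.mean()) / np.sqrt(
    (group_a.var(ddof=1) + group_b.var(ddof=1)) / 2
)
print(f"p-value = {p:.2g}, Cohen's d = {cohens_d:.3f}")   # tiny d despite a tiny p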