Guidelines
Take the online course on MyEducator: SEM Online 3 credit Graduate Course Invitation Video
On this wiki page I share my thoughts on various academic topics, including my 10 Steps to building a good quantitative variance model that can be addressed using a well-designed survey, as well as some general guidelines for structuring a quantitative model building/testing paper. These are just off the top of my head and do not come from any sort of published work. However, I have found them useful and hope you do as well.
Pivotal Analytical SEM Decisions
What appear to be small decisions may have drastic consequences for the accuracy of your solution. Changing the direction of an airplane by a single degree can completely alter the destination (Dieter F. Uchtdorf). Below I've outlined and explained some of these critical decision points that arise during the SEM process.
- coming soon...
How to start any (EVERY) Research Project
LESSON: Improving your scientific writing
VIDEO TUTORIAL: Webinar Recording (Improving your scientific writing)
In a page or less, using only bullet points, answer these questions (or fill out this outline). Then share it with a trusted advisor (not me unless I am actually your advisor) to get early feedback. This way you don't waste your time on a bad or half-baked idea. You might also consider reviewing the editorial by Arun Rai at MISQ called: "Avoiding Type III Errors: Formulating Research Problems that Matter." This is written for the information systems field, but is generalizable to all fields.
- What is the problem you are seeking to address? (If there is no problem, then there is usually no research required. Also, a gap in research is not a problem. There must be a problem AND a gap in research. Also, please stick to ONE problem; otherwise your paper will likely be convoluted and bloated.)
- But what if you don't even know what problem to pursue? How do you find a problem? One way is to first consider the research domain (e.g., cyber security, human-AI relations, or healthcare systems), and then consider what the pain points might be in your chosen domain.
- If you're truly at a loss for ideas, check out the most recent track descriptions from your field's most prestigious conferences. For example, in Information Systems, you might take a look at the ICIS track descriptions, like these ones from 2020: 2020 ICIS Tracks
- Why is this an important (not just interesting) contemporary or upcoming problem? (i.e., old problems don't need to be readdressed if they are not still a problem)
- And, why is it a problem for your specific research field? (If it doesn't fall within your field, could it be reframed to fit your field? If not, then perhaps you should get tenure first and then worry about this problem...)
- Who else has addressed this problem? (Very rarely is the answer to this: "nobody". Be creative. Someone has studied something related to this problem, even if it isn't the exact same problem. This requires a lit review.)
- In what way are the prior efforts of others incomplete? (i.e., if others have already addressed the problem, what is left to study - what are the "gaps"?)
- How will you go about filling these gaps in prior research? (i.e., study design)
- Why is this an appropriate approach?
- If applicable, who is your target population for studying this problem? (Where are you going to get your data?)
- How are you going to get the data you want? (quantity and quality)
If you would like to use the answers to the above questions as the substance of your introduction section, just add these two points:
- Overall, what did your endeavors discover?
- How is your paper organized to effectively communicate the arguments and contributions you are trying to make?
Developing Your Quantitative Model
Ten Steps for Formulating a Decent Quantitative Model
- Identify and define your dependent variables. These should be the outcome(s) of the phenomenon you are interested in better understanding. They should be the affected thing(s) in your research questions.
- Figure out why explaining and predicting these DVs is important.
- Why should we care?
- For whom will it make a difference?
- What can we possibly contribute to knowledge that is not already known?
- If these are all answerable and suggest continuing the study, then go to #3; otherwise, go back to #1 and try different DVs.
- Form one or two research questions around explaining and predicting these DVs.
- Scoping your research questions may also require you to identify your population.
- Is there some existing theory that would help explore these research questions?
- If so, then how can we adopt it for specifically exploring these research questions?
- Does that theory also suggest other variables we are not considering?
- What do you think (and what has research said) impacts the DVs we have chosen?
- These become IVs.
- What is it about these IVs that is causing the effect on the DVs?
- These become Mediators.
- Do these relationships depend on other factors, such as age, gender, race, religion, industry, organization size and performance, etc.?
- These become Moderators.
- What variables could potentially explain and predict the DVs, but are not directly related to our interests?
- These become control variables. These are often some of those moderators like age and gender, or variables used as controls in extant literature.
- Identify your population.
- Do you have access to this population?
- Why is this population appropriate to sample in order to answer the research questions?
- Based on all of the above, but particularly #4, develop an initial conceptual model involving the IVs, DVs, Mediators, Moderators, and Controls.
- If tested, how will this model contribute to research (make us think differently) and practice (make us act differently)?
From Model Development to Model Testing
Video explanation of this section
Critical tasks that happen between model development and model testing
- Develop a decent quantitative model
- see previous section
- Find existing scales and develop your own if necessary
- You need to find ways to measure the constructs you want to include in your model. Usually this is done through reflective latent measures on a Likert scale. It is conventional and encouraged to leverage existing scales that have already been either proposed or, better yet, validated in extant literature. If you can’t find existing scales that match your construct, then you might need to develop your own. For guidelines on how to design your survey, please see the next section #Guidelines_on_Survey_Design
- Find existing scales
- I’ve made a VIDEO TUTORIAL about finding existing scales. The easy way is to go to http://inn.theorizeit.org/ and search their database. You can also search Google Scholar for scale development of your construct. Make sure to note the source for the items, as you will need to report this in your manuscript.
- Once you’ve found the measures you need, you’ll most likely need to adapt them to your context. For example, let’s say you’re studying the construct of Enjoyment in the context of Virtual Reality. If the existing scale was “I enjoy using the website”, you’ll want to change that to “I enjoyed the Virtual Reality experience” (or something like that). The key consideration is to retain the “spirit” or intent of the item and construct. If you do adapt the measures, be sure to report your adaptations in the appendix of any paper that uses these adapted measures.
- In the same spirit of adapting, you can also trim the scale as needed. Many established scales are far too long, consisting of more than 10 items. A reflective construct never requires more than 4 or 5 items. Simply pick the 4-5 items that best capture the construct of interest. If the scale is multidimensional, it is likely formative. In this case, you can either:
- Keep the entire scale (this can greatly inflate your survey, but it allows you to use a latent structure)
- Keep only one dimension (just pick the one that best reflects the construct you are interested in)
- Keep one item from each dimension (this allows you to create an aggregate score; i.e., sum, average, or weighted average)
- Develop new scales
- Developing new scales is a bit trickier, but is perhaps less daunting than many make it out to be. The first thing you must do before developing your own scales is to precisely define your construct. You cannot develop new measures for a construct if you do not know precisely what it is you are hoping to measure.
- Once you have defined your construct, I strongly recommend developing reflective scales where applicable. These are far easier to handle statistically, and are more amenable to conventional SEM approaches. Formative measures can also be used, but they involve several caveats and considerations during the data analysis stage.
- For reflective measures, simply create 5 interchangeable statements that can be measured on a 5-point Likert scale of agreement, frequency, or intensity. We develop 5 items so that we have some flexibility in dropping 1 or 2 during the EFA if needed. If the measures are truly reflective, using more than 5 items would be unnecessarily redundant. If we were to create a scale for Enjoyment (defined in our study as the extent to which a user receives joy from interacting with the VR), we might have the following items that the user can answer from strongly disagree to strongly agree:
- I enjoyed using the VR
- Interacting with the VR was fun
- I was happy while using the VR
- Using the VR was boring (reverse coded)
- Using the VR was pleasurable
- If developing your own scales, do pretesting (talk aloud, Q-sort)
- To ensure the newly developed scales make sense to others and will hopefully measure the construct you think they should measure, you need to do some pretesting. Two very common pretesting exercises are ‘talk-aloud’ and ‘Q-sort’.
- Talk-aloud exercises involve sitting down with between five and eight individuals who are within, or close to, your target population. For example, if you plan on surveying nurses, then you should do talk-alouds with nurses. If you are surveying a more difficult-to-access population, such as CEOs, you can probably get away with doing talk-alouds with upper-level management instead. The purpose of the talk-aloud is to see if the newly developed items make sense to others. Invite the participant (just one participant at a time) to read each item out loud and respond to it. If they struggle to read it, then it is worded poorly. If they have to think very long about how to answer, then it needs to be more direct. If they are unsure how to answer, then it needs to be clarified. If they say “well, it depends” then it needs to be simplified or made more contextually specific. You get the idea. After the first talk-aloud, revise your items accordingly, and then do the second talk-aloud. Repeat until you stop getting meaningful corrections.
- Q-sort is an exercise where the participant (ideally from the target population, but not strictly required) has a card (physical or digital) for each item in your survey, even existing scales. They then sort these cards into piles based on what construct they think the item is measuring. To do this, you’ll need to let them know your constructs and the construct definitions. This should be done for formative and reflective constructs, but not for non-latent constructs (e.g., gender, industry, education). Here is a video I’ve made for Q-sorting: Q-sorting in Qualtrics.
- You should have at least 8 people participate in the Q-sort. If you arrive at consensus (>70% agreement between participants) after the first Q-sort, then move on. If not, identify the items that did not achieve adequate consensus, and then try to reword them to be more conceptually distinct from the construct they mis-loaded on while being more conceptually similar to the construct they should have loaded on. Repeat the Q-sort (with different participants) until you arrive at adequate consensus.
- Identify target sample and, if necessary, get approval to contact
- Before you can submit your study for IRB approval, you must identify who you will be collecting data from. Obtain approval and confirmation from whoever has stewardship over that population. For example, if you plan to collect data from employees at your current or former organization, you should obtain approval from the proper manager over the group you plan to solicit. If you are going to collect data from students, get approval from their professor(s).
- Conduct a Pilot Study
- It is exceptionally helpful to conduct a pilot study if time and target population permit. A pilot study is a smaller data collection effort (between 30 and 100 participants) used to obtain reliability scores (like Cronbach’s alpha; see the sketch after this list for computing it yourself) for your reflective latent factors, to confirm the direction of relationships, and to do preliminary manipulation checks (where applicable). Usually the sample size of a pilot study will not allow you to test the full model (either measurement or structural) altogether, but it can give you sufficient power to test pieces at a time. For example, you could do an EFA with 20 items at a time, or you could run simple linear regressions between an IV and a DV.
- Often time and target population do not make a pilot study feasible. For example, you would never want to cannibalize your target population if that population is difficult to access and you are concerned about final sample size. Surgeons, for example, are a hard population to access. Doing a pilot study of surgeons will cannibalize your final sample size. Instead, you could do a pilot study of nurses, or possibly resident surgeons. Deadlines are also real, and pilot studies take time – although they may save you time in the end. If the results of the pilot study reveal poor Cronbach’s alphas, poor loadings, or significant cross-loadings, you should revise your items accordingly. Poor Cronbach’s alphas and poor loadings indicate too much conceptual inconsistency between the items within a construct. Significant cross-loadings indicate too much conceptual overlap between items across separate constructs.
- Get IRB approval
- Once you’ve identified your population and obtained confirmation that you’ll be able to collect data from them, you are now ready to submit your study for approval to your local IRB. You cannot publish any work that includes data collected prior to obtaining IRB approval. This means that if you did a pilot study before obtaining approval, you cannot use that data in the final sample (although you can still say that you did a pilot study). IRB approval can take between 3 days and 6 weeks (or more), depending on the nature of your study and the population you intend to target. Typically studies of organizations regarding performance and employee dispositions and intentions are simple and do not get held up in IRB review. Studies that involve any form of deception or risk (physical, psychological, or financial) to participants require extra consideration and may require oral defense in front of the IRB.
- Collect Data
- You’ve made it! Time to collect your data. This could take anywhere between three days and three months, depending on many factors. Be prepared to send reminders. Incentives won’t hurt either. Also be prepared to obtain only a fraction of the responses you expected. For example, if you are targeting an email list of 10,000 brand managers, expect half of the emails to bounce, three quarters of the remainder to go unread, and then 90% of what is left to be ignored. That leaves only 125 responses, 20% of which may be unusable, thus leaving us with only 100 usable responses from our original 10,000.
- Test your model
- see next section
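To make the pilot-study reliability check concrete, here is a minimal sketch of computing Cronbach's alpha in Python with pandas. The file name and the enjoy1-enjoy5 columns are hypothetical stand-ins for the five Enjoyment items developed above (with the reverse-coded item already re-reversed); the formula itself is the standard alpha.

```python
import pandas as pd

def cronbachs_alpha(items: pd.DataFrame) -> float:
    """Standard Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the scale total)."""
    items = items.dropna()                          # listwise deletion, for simplicity
    k = items.shape[1]                              # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot data containing the five Enjoyment items from the example above,
# with the reverse-coded item (enjoy4) already re-reversed.
pilot = pd.read_csv("pilot_responses.csv")
enjoyment = pilot[["enjoy1", "enjoy2", "enjoy3", "enjoy4", "enjoy5"]]
print(f"Cronbach's alpha (Enjoyment): {cronbachs_alpha(enjoyment):.3f}")
```

A conventional rule of thumb is that an alpha of at least 0.70 indicates acceptable reliability, though with a pilot sample of only 30-100 the estimate will be noisy.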
Guidelines on Survey Design
- Make sure you are using formative or reflective measures intentionally (i.e., know which ones are which and be consistent). If you are planning on using AMOS, make sure all measures are reflective, or be willing to create calculated scores out of your formative measures.
- If reflective measures are used, make sure they are truly reflective (i.e., that all items must move together).
- If any formative measures are used, make sure there is sufficient and equal representation from each dimension (i.e., same number of items per dimension).
- Make sure you are using the proper scale for each measure. Many scholars will mistakenly use a 5-point Likert scale of agreement (1=strongly disagree, 5=strongly agree) for everything, even when it is not appropriate. For example, if the item is “I have received feedback from my direct supervisor”, a scale of agreement makes no sense. It is a yes/no question. You could perhaps change it to a scale of frequency: 1=never, 5=daily, but a scale of agreement is not correct.
- Along these same lines, make sure your measures are not yes/no, true/false, etc. if they are intended to belong to reflective constructs.
- Make sure scales go from left to right, low to high, negative to positive, absence to presence, and so on. This is so that when you start using statistics on the data, an increase in the value of the response represents an increase in the trait measured.
- Use exact numbers wherever possible, rather than buckets. This allows you much more flexibility to later create buckets of even size if you want to. This also gives you richer data. For example, use a slider for exact age in years rather than age ranges.
- However, make sure to restrict what types of responses can be given for numbers. For example, instead of asking someone’s age with a text box entry, use a slider. This prevents them from giving responses like: “twenty seven”, “twenty-seven”, “twentisven”, “227”, and “none of your business”.
- Avoid including “N/A” and “other” options if possible. These get coded as some number (e.g., 0, 6, or 8), but that number is completely invalid as a scale value. When you run statistics on the data, your software doesn’t know those numbers are invalid, so it treats them as actual datapoints.
- Despite literature stating the contrary, I’ve found reverse-coded questions a perpetual nightmare. They nearly always fail in the factor analysis because some cultures are drawn to the positive end of the scale, while others are drawn to the negative end. So they rarely capture the trait the way you intend. When I design new surveys, I nearly always re-reverse reverse-coded questions so that they are worded in the same direction as the regular items. (If you inherit reverse-coded items from an existing scale, see the cleaning sketch after this list for re-reversing them in the data.)
- Measure only one thing with each item. Don’t ask about two things at once. For example, don’t include items like this: “I prefer face to face communication and don’t like talking via web conferencing.” This asks about two separate things. What if they like both?
- Include 4-6 items per reflective construct. Four is ideal for the final model, but include an extra item or two under the assumption that a couple of the measures won't work out.
- Don’t make assumptions with your measures. For example, this item assumes everyone loses their temper: “When I lose my temper, it is difficult to think long term.”
- Make sure your items are applicable to everyone within your sampled population. For example, don’t include items like this: “My children are a handful.” What if this respondent doesn’t have children? How should they respond?
- Be careful including sensitive questions, or questions that have a socially desirable way to respond. Obvious ones might be: “I occasionally steal from the office” or “I don’t report all my assets on my tax forms”. Regardless of the actual truth, respondents will enter the more favorable response. More subtle examples might include: “I consider myself a critical thinker” or “sometimes I lose self-control”. These are less obvious, but will still result in biased responses because everyone thinks they are critical thinkers and no one wants to admit that they have anything less than full control over their emotions and self. If you do have sensitive questions, put them at the end of the survey so that you don't scare away respondents.
- Include an occasional attention trap so that you can catch those who are responding without thinking. Such items should be mixed in with the regular items and should not stand out. For example, if a set of regular items all start with “My project team often…” then make sure to word your attention trap the same way. For example, “My project team often, never mind, please respond with somewhat disagree”.
- Put your dependent variables at the beginning of the survey, just in case the respondent drops out early. At least this way you have the DV.
- If your independent and dependent variables are perceptual and collected at the same time with the same instrument (e.g., single survey), then make sure to include some sort of method bias scale, such as social desirability bias, or attitude toward the color blue (Google it).
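To make the “N/A” and reverse-coding guidelines above concrete, here is a small pandas sketch. The file name, the column names, and the assumption that “N/A” was exported as a 6 on a 1-5 Likert scale are all hypothetical; the re-reversal arithmetic (scale_max + 1 - value) is the standard recode.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical survey export

# Treat "N/A"/"other" codes as missing rather than as real datapoints.
# Here we assume "N/A" was exported as a 6 on a 1-5 Likert scale.
likert_items = ["enjoy1", "enjoy2", "enjoy3", "enjoy4", "enjoy5"]
df[likert_items] = df[likert_items].replace(6, np.nan)

# Re-reverse any reverse-coded items so a higher number always means more
# of the trait (enjoy4 = "Using the VR was boring" in the example above).
scale_max = 5
df["enjoy4"] = scale_max + 1 - df["enjoy4"]
```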
Order of Operations for Testing your Model
Some general guidelines for the order to conduct each procedure
VIDEO TUTORIAL: SEM Speed Run
- The SEM Speed Run does almost everything listed below. However, I've also added a few more links below for items that either are not covered in the speed run or have been updated since it was made.
- Develop a good theoretical model
- See the Ten Steps above
- Develop hypotheses to represent your model: Causal_Models#Hypotheses
- Case Screening (a screening sketch covering this step and the next follows this list)
- Missing data in rows
- Unengaged responses
- Outliers (on continuous variables)
- Variable Screening
- Missing data in columns
- Skewness & Kurtosis
- Exploratory Factor Analysis: Messy EFA
- Iterate until you arrive at a clean pattern matrix
- Adequacy
- Convergent validity
- Discriminant validity
- Reliability: Improving Reliability
- Confirmatory Factor Analysis
- Obtain a roughly decent model quickly (cursory model fit, validity)
- Do configural, metric, and scalar invariance tests (if using grouping variable in causal model)
- Response bias (aka common method bias, use specific bias variable(s) if possible): Method Bias Plugin
- Validity and Reliability check: Master Validity Plugin (if method bias was detected, remove the CLF or whatever variable is affecting all observed variables, while conducting this final validity check. You would then put it back in before imputing factor scores if there is bias.)
- Final measurement model fit: Model Fit Plugin
- Optionally, impute factor scores: Imputing Factor Scores
- Structural Models (which of these models you test depends on your theory; not all are always required)
- Multivariate Assumptions
- Outliers and Influentials
- Multicollinearity
- Include control variables in all of the following analyses
- Mediation
- Test indirect effects using bootstrapping
- If you have multiple indirect paths from same IV to same DV, use AxB estimand or: Specific Indirect Effects Plugin
- Interactions (see the interaction sketch after this list)
- Standardize constituent variables (if not already standardized)
- Compute new product terms
- Plot significant interactions
- Multigroup Comparisons
- Create multiple models
- Assign them the proper group data
- Test significance of moderation via chi-square difference test: MGA Magic Plugin
- Report findings in a concise table
- Ensure global and local tests are met
- Include post-hoc power analyses for unsupported direct effects hypotheses
- Write paper
- See guidelines below
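For the case and variable screening steps in the list above, here is a minimal pandas sketch. The file name and item columns are hypothetical, and every cutoff shown is a judgment call rather than a hard rule.

```python
import pandas as pd

df = pd.read_csv("final_responses.csv")  # hypothetical final dataset
items = ["enjoy1", "enjoy2", "enjoy3", "enjoy4", "enjoy5"]  # hypothetical Likert columns

# Case screening: drop rows missing more than 10% of the items.
df = df[df[items].isna().mean(axis=1) <= 0.10]

# Case screening: drop unengaged responses; straight-liners show
# near-zero variance across the Likert items.
df = df[df[items].std(axis=1) > 0.3]

# Variable screening: how much missing data does each column have?
print(df[items].isna().mean().sort_values(ascending=False))

# Variable screening: skewness & kurtosis per item
# (common rules of thumb flag |skewness| > 2 or |kurtosis| > 7).
print(pd.DataFrame({"skewness": df[items].skew(), "kurtosis": df[items].kurt()}))
```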
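And for the interaction step, here is a sketch that uses ordinary regression in statsmodels as a simplified stand-in for the latent interaction you would test in AMOS: standardize the constituent variables, compute the product term, and plot simple slopes at plus and minus one standard deviation of the moderator. The variable names (enjoyment, experience, intention) and the file of imputed factor scores are hypothetical.

```python
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("factor_scores.csv")  # hypothetical imputed factor scores

# Standardize the constituent variables, then compute the product term.
for var in ["enjoyment", "experience"]:
    df["z_" + var] = (df[var] - df[var].mean()) / df[var].std()
df["z_product"] = df["z_enjoyment"] * df["z_experience"]

# Moderated regression: does experience moderate the enjoyment -> intention effect?
model = smf.ols("intention ~ z_enjoyment + z_experience + z_product", data=df).fit()
print(model.summary())

# Plot simple slopes at +/- 1 SD of the moderator.
for level, label in [(-1, "Low experience (-1 SD)"), (1, "High experience (+1 SD)")]:
    x = pd.Series([-1.0, 1.0])  # enjoyment at +/- 1 SD
    y = (model.params["Intercept"]
         + model.params["z_enjoyment"] * x
         + model.params["z_experience"] * level
         + model.params["z_product"] * x * level)
    plt.plot(x, y, marker="o", label=label)
plt.xlabel("Enjoyment (standardized)")
plt.ylabel("Predicted intention")
plt.legend()
plt.show()
```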
Structuring a Quantitative Paper
Standard outline for quantitative model building/testing paper
- Title (something catchy and accurate)
- Abstract (concise – 150-250 words – to explain paper): roughly one sentence each:
- What is the problem?
- Why does it matter?
- How do you address the problem?
- What did you find?
- How does this change practice (what people in business do), and how does it change research (existing or future)?
- Keywords (4-10 keywords that capture the contents of the study)
- Introduction (2-4 pages) The main purpose of the introduction is to convince the reader that this study is needed.
- What is the problem and why does it matter? And what have others done to try to address this problem, and why have their efforts been insufficient (i.e., what is the gap in the literature)? (1-2 paragraphs)
- What is your DV(s) and what is the context you are studying it in? Also briefly define the DV(s). (1-2 paragraphs)
- One sentence about sample (e.g., "377 undergraduate university students using Excel").
- How does studying this DV(s) in this context adequately address the problem? (1-2 paragraphs)
- What existing theory/theories do you leverage, if any, to pursue this study, and why are these appropriate? (1-2 paragraphs)
- Who else has pursued this research question (or something related), and why were their efforts insufficient? (see section above about "how to start every research project") (1-2 paragraphs)
- Briefly discuss the primary contributions of this study in general terms without discussing exact findings (i.e., no p-values here).
- How is the rest of the paper organized? (1 paragraph)
- Literature review (1-3 pages) The main purpose of the literature review section is to establish who else has addressed a similar research question, and how your study will extend or clarify their work. This helps both for positioning your contribution within extant literature, and for motivating why your study is needed (beyond these extant studies).
- Fully define your dependent variable(s) and summarize how it has been studied in existing literature within your broader context (like information systems, organizations, etc.).
- If you are basing your model on an existing theory/model, use this next space to explain that theory (1 page) and then explain how you have adapted that theory to your study.
- If you are not basing your model on an existing theory/model, then use this next space to explain how existing literature in your field has tried to predict your DV(s) or tried to understand related research questions.
- (Optionally) Explain what other constructs you suspect will help predict your DV(s) and why. Inclusion of a construct should have good logical/theoretical and/or literature support. For example, “we are including construct xyz because the theory we are basing our model on includes xyz.” Or, “we are including construct xyz because the following logic (abc) constrains us to include this variable lest we be careless”. Try to do this without repeating everything you are just going to say in the theory section anyway.
- (Optionally) Briefly discuss control variables and why they are being included.
- Theory & Hypotheses (take what space you need, but try to be parsimonious) The main purpose of this section is to justify your theory (provide rational rationale for the relationships you are suggesting).
- Briefly summarize your conceptual model and show it with the Hypotheses labeled (if possible).
- Begin supporting H1 then state H1 formally. Support should include strong causal logic and literature.
- H2, H3, etc. If you have sub-hypotheses, list them as H1a, H1b, H2a, H2b, etc.
- Methods (keep it brief; many approaches; this is just a common template) The main purpose of this section is to convince the reader that you chose the right method and that you did it correctly.
- Construct operationalization (where did you get your measures?)
- Instrument development (if you created your own measures)
- Explanation of study design (e.g., pretest, pilot, and online survey)
- Sampling (some descriptive statistics, like demographics (education, experience, etc.) and sample size; don't forget to discuss response rate (number of responses as a percentage of the number of people invited to do the study)).
- Mention that IRB exempt status was granted and protocols were followed if applicable.
- Method for testing hypotheses (e.g., structural equation modeling in AMOS). If you conducted multi-group comparisons, mediation, and/or interaction, explain how you kept them all straight and how you went about analyzing them. For example, if you did mediation, what approach did you take (hopefully bootstrapping)? Were there multiple models tested, or did you keep all the variables in for all analyses? If you did interaction, did you add that in afterward, or was it in from the beginning?
- Analysis (1-3 pages; sometimes combined with methods section) The main purpose of this section is to convince the reader that your analysis was done correctly and your hypotheses were tested appropriately.
- Data Screening
- EFA (report pattern matrix and Cronbach's alphas in appendix) – mention if items were dropped.
- CFA (just mention that you did it and bring up any issues you found) – mention any items dropped during CFA. Report model fit for the final measurement model. Supporting material can be placed in the Appendices if necessary.
- Mention CMB approach and results and actions taken if any (e.g., if you found CMB and had to keep the CLF).
- Report the correlation matrix, CR and AVE (you can include MSV and ASV if you want), and briefly discuss any issues with validity and reliability – if any.
- Report whether you used the full latent SEM, or if you imputed factor scores for a path model.
- Report the final structural model(s) (include R-squares and betas) and the model fit for the model(s).
- Findings (1-2 pages) The main purpose of this section is to report the results of your hypothesis tests.
- Report the results for each hypothesis (supported or not, with evidence).
- Point out any unsupported or counter-evidence (significant in opposite direction) hypotheses.
- Provide a table that concisely summarizes your findings.
- Discussion (2-5 pages) The main purpose of this section is to convince the reader of your contributions, and to expand their understanding of your findings.
- Summarize briefly the study and its intent and findings, focusing mainly on the research question(s) (one paragraph).
- What insights did we gain from the study that we could not have gained without doing the study?
- How do these insights change the way practitioners do their work?
- How do these insights shed light on existing literature and shape future research in this area?
- What limitations is our study subject to (e.g., surveying students, just survey rather than experiment, statistical limitations like CMB etc.)?
- What are some opportunities for future research based on the insights of this study?
- Conclusion (1-2 paragraphs) The main purpose of this section is to motivate the reader to use your work in their own work.
- Summarize the insights gained from this study and how they address existing gaps or problems.
- Explain the primary contribution of the study.
- Express your vision for moving forward or how you hope this work will affect the world.
- References (Please use a reference manager like EndNote)
- Appendices (Any additional information, like the instrument and measurement model stuff that is necessary for validating or understanding or clarifying content in the main body text.)
- DO NOT pad the appendices with unnecessary statistics tables and illegible statistical models. Everything in the appendix should add value to the manuscript. If it doesn't add value, remove it.
Dissertation Examples using SEM
If you would like to see some examples of dissertations using SEM, here are some from the graduates of the Doctor of Business Administration program at the Weatherhead School of Management at Case Western Reserve University. I cannot guarantee that they are all exemplary, but you will find many examples here. Not all use SEM as the primary method, but most should contain chapters with SEM.
My Thoughts on Conference Presentations
I've presented at and attended many, many conferences. Over the years, I've seen the good, the bad, and the ugly in terms of presentation structure, content, and delivery. Here are a few of my thoughts on what to include and what to avoid.
What to include in a conference presentation
- What’s the problem and why is it important to study?
- Don’t short-change this part. If the audience doesn’t understand the problem, or why it is important, they won’t follow anything else you say.
- Who else has researched this and what did they miss?
- Keep this short; just mention the key studies you’re building off of.
- How did we fill that gap?
- Theoretically and methodologically
- What did we find, what does it mean, and why does it matter?
- Spend most of your time here.
- This should include how the findings change the way we think (implications for and responses to research).
- Also include how this changes the way we work/behave (implications for practice).
- What are the natural next steps?
- What did you not get to do in this study that would have been awesome?
- Don't get too creative here. Stay within the same domain as the current study.
- The end. Short and sweet.
What to avoid in a conference presentation
- Long lit review
- Completely unnecessary. You don’t have time for this. Just mention the key pieces you’re building off of.
- Listing all hypotheses and explaining each one
- Just show a model (or some illustrative figure) and point out the most important parts.
- Including big tables of statistics (for quant) or quotes (for qual)
- Just include a model with indications of significance if a quantitative study.
- Just include a couple key quotes (no more than one per slide) if a qualitative study.
- Back story on origination of the idea
- Don’t care unless it’s crazy fascinating and would make a great movie.
- Travel log of methodology
- Again, don’t care. We figure you did the thing right.
- Statistics on model validation and measurement validation.
- Again, we figure you did the thing right. We’ll read the paper if we want to check your measurement model.
- Repeating yourself too much
- The time is short. There is no need to be redundant.
- Using more words than images
- Presentations are short and so are attention spans. Use pictures with a few words only. I can’t read your slide and listen to you at the same time. I bet you’d rather I listen to you than read your slide.
- Reading the entire prepared presentation...
- Yes, that has happened, more than once... cringe...
- Failing to take notes of feedback
- Literally write down on paper the feedback you get, even if it is stupid. This is just respectful.