Discussion Board #1 covers Chapters 1-4, and uses the following research topic:
1. http://www.npr.org/sections/health-shots/2015/01/20/378608279/the-inner-city-might-not-be-to-blame-for-high-asthma-rates (website)
2. Asthma Inner City NOT (PDF attached)
3. Textbook: ISBN: 9781483380612 (Key Terms attached)
DISCUSSION BOARD INSTRUCTIONS
For each discussion board, you will read a real research article and tie 10 terms from the book to the article. For each term, you will define the term using the book, explain the definition, and then tie it to the article.
Here is an example:
1. Sample generalizability: “Exists when a conclusion based on a sample, or subset, of a larger population holds true for that population.” In other words, sample generalizability allows researchers to describe a population without actually sampling the entire population. In the research article, the researchers studied 133,468 US men and women in different states, which is an adequate sample size; however, as stated above, 99% of the men in the HPFS, 97% of the women in the NHS, and 99% of the women in the NHS II were white. Therefore, the study findings may not be generalizable because nearly all the participants were well-educated white adults.
Each post should be well-developed, well-organized, and well-formatted, and free of writing errors. The post should demonstrate that you have truly mastered the terms involved and can apply them to real-world research.
Science, Society and Social Research
1. Overgeneralization: Occurs when we unjustifiably conclude that what is true for some cases is true for all cases.
2. Selective or inaccurate observation: Choosing to look only at the things that are in line with our preferences or beliefs.
3. Illogical reasoning: The premature jumping to conclusions or arguing based on invalid assumptions.
4. Resistance to change: The reluctance to change our ideas in light of additional information.
5. Science: A set of logical, systematic, documented methods for investigating nature and natural processes; the knowledge produced by these investigations.
6. Social science: The use of scientific methods to investigate individuals, societies, and social processes; the knowledge produced by these investigations.
7. Descriptive research: Research in which social phenomena are defined and described.
8. Exploratory research: Seeks to find out how people get along in the setting under question, what meanings they give to their actions, and what issues concern them.
9. Explanatory research: Seeks to identify causes and effects of social phenomena and to predict how one phenomenon will change or vary in response to variation in another phenomenon.
10. Evaluation research: Research that describes or identifies the impact of social policies and programs.
11. Validity: The state that exists when statements or conclusions about empirical reality are correct.
12. Measurement validity: Exists when an indicator measures what we think it measures.
13. Generalizability: Exists when a conclusion holds true for the population, group, setting, or event that we say it does, given the conditions that we specify; it is the extent to which a study can inform us about persons, places, or events that were not directly studied.
14. Sample generalizability: Exists when a conclusion based on a sample, or subset, of a larger population holds true for that population.
15. Cross-population generalizability (external validity): Exists when findings about one group, population, or setting hold true for other groups, populations, or settings.
16. Causal validity (internal validity): Exists when a conclusion that A leads to, or results in, B is correct.
The Process and Problems of Social Research
1. Social research question: A question about the social world that is answered through the collection and analysis of firsthand, verifiable, empirical data.
2. Theory: A logically interrelated set of propositions about empirical reality.
3. Inductive research: The type of research in which general conclusions are drawn from specific data.
4. Deductive research: The type of research in which a specific expectation is deduced from a general premise and is then tested.
5. Research circle: A diagram of the elements of the research process, including theories, hypotheses, data collection, and data analysis.
6. Hypothesis: A tentative statement about empirical reality involving a relationship between two or more variables. Example: The higher the poverty rate in a community, the higher the percentage of community residents who are homeless.
7. Variable: A characteristic or property that can vary (take on different values or attributes). Examples: poverty rate, percentage of community residents who are homeless.
8. Dependent variable: A variable that is hypothesized to vary depending on or under the influence of another variable. Example: percentage of community residents who are homeless.
9. Independent variable: A variable that is hypothesized to cause, or lead to, variation in another variable. Example: poverty rate.
10. Direction of association: A pattern in a relationship between two variables; that is, the value of one variable tends to change consistently in relation to change in the other variable. The direction of association can be either positive or negative.
11. Inductive reasoning: The type of reasoning that moves from the specific to the general.
12. Anomalous: Unexpected patterns in data that do not seem to fit the theory being proposed.
13. Serendipitous: Unexpected patterns in data that stimulate new ideas or theoretical approaches.
14. Cross-sectional research design: A study in which data are collected at only one point in time.
15. Longitudinal research design: A study in which data are collected that can be ordered in time; also defined as research in which data are collected at two or more points in time.
16. Individual unit of analysis: A unit of analysis in which individuals are the source of data and the focus of conclusions.
17. Group unit of analysis: A unit of analysis in which groups are the source of data and the focus of conclusions.
18. Trend (repeated cross-sectional) design: A longitudinal study in which data are collected at two or more points in time from different samples of the same population.
19. Panel design: A longitudinal study in which data are collected from the same individuals, the panel, at two or more points in time.
20. Cohort: Individuals or groups with a common starting point.
21. Cohort design: A longitudinal study in which data are collected at two or more points in time from individuals in a cohort.
22. Units of analysis: The entities being studied, whose behavior is to be understood.
23. Ecological fallacy: An error in reasoning in which conclusions about individual-level processes are drawn from group-level data.
24. Reductionist fallacy (reductionism): An error in reasoning that occurs when incorrect conclusions about group-level processes are based on individual-level data.
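The contrast between the ecological and reductionist fallacies can be made concrete with a small numeric sketch. The numbers below are invented purely for illustration (they are not from the textbook or the article), and the use of NumPy is my choice; the point is only that an association measured at the group level can differ from, or even reverse, the association among the individuals within each group.

```python
# Invented data for illustration: two hypothetical groups (e.g., two
# communities), with individual-level x and y scores in each.
import numpy as np

group_a_x, group_a_y = np.array([1, 2, 3]), np.array([5, 4, 3])
group_b_x, group_b_y = np.array([4, 5, 6]), np.array([8, 7, 6])

# Within each group, the individual-level association is negative.
r_within_a = np.corrcoef(group_a_x, group_a_y)[0, 1]
r_within_b = np.corrcoef(group_b_x, group_b_y)[0, 1]

# At the group level (one mean per group), the association is positive.
group_means_x = [group_a_x.mean(), group_b_x.mean()]
group_means_y = [group_a_y.mean(), group_b_y.mean()]
r_group = np.corrcoef(group_means_x, group_means_y)[0, 1]

# Concluding from the positive group-level correlation that x and y
# rise together for individuals would commit the ecological fallacy.
```

Here the group-level correlation is positive while every within-group individual-level correlation is negative, which is exactly why conclusions about individuals cannot be read straight off group-level data.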
Ethics in Research
1. Obedience experiments (Milgram’s): A series of famous experiments conducted during the 1960s by Stanley Milgram, a psychologist from Yale University, testing subjects’ willingness to cause pain to another person if instructed to do so.
2. Nuremberg war crime trials: Trials held in Nuremberg, Germany, in the years following World War II, in which the former leaders of Nazi Germany were charged with war crimes and crimes against humanity; frequently considered the first trials for people accused of genocide.
3. Tuskegee syphilis study: Research study conducted by a branch of the U.S. government, lasting for roughly 50 years (ending in the 1970s), in which a sample of African American men diagnosed with syphilis were deliberately left untreated, without their knowledge, to learn about the lifetime course of the disease.
4. Belmont Report: Report in 1979 of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research stipulating three basic ethical principles for the protection of human subjects: respect for persons, beneficence, and justice.
5. Respect for persons: In human subjects ethics discussions, treating persons as autonomous agents and protecting those with diminished autonomy.
6. Beneficence: Minimizing possible harms and maximizing benefits.
7. Justice: As used in human research ethics discussions, distributing benefits and risks of research fairly.
8. Federal Policy for the Protection of Human Subjects: Federal regulations codifying basic principles for conducting research on human subjects; used as the basis for professional organizations’ guidelines.
9. Institutional review board (IRB): A group of organizational and community representatives required by federal law to review the ethical issues in all proposed research that is federally funded, involves human subjects, or has any potential for harm to subjects.
10. Office for Protection from Research Risk, National Institutes of Health: Federal agency that monitors institutional review boards (IRBs).
11. Debriefing: A researcher’s informing subjects after an experiment about the experiment’s purposes and methods and evaluating subjects’ personal reactions to the experiment.
12. Prison simulation study (Zimbardo’s): Famous study from the early 1970s, organized by Stanford psychologist Philip Zimbardo, demonstrating the willingness of average college students to quickly become harsh disciplinarians when put in the role of (simulated) prison guards over other students; usually interpreted as demonstrating an easy human readiness to become cruel.
13. Tearoom Trade: Book by Laud Humphreys investigating the social background of men who engage in homosexual behavior in public facilities; controversially, he did not obtain informed consent from his subjects.
14. Health Insurance Portability and Accountability Act (HIPAA): A U.S. federal law passed in 1996 that guarantees, among other things, specified privacy rights for medical patients, in particular those in research settings.
15. Confidentiality: Provided by research in which identifying information that could be used to link respondents to their responses is available only to designated research personnel for specific research needs.
16. Certificate of Confidentiality: Document issued by the National Institutes of Health to protect researchers from being legally required to disclose confidential information.
Conceptualization and Measurement
1. Concept: A mental image that summarizes a set of similar observations, feelings, or ideas.
2. Conceptualization: The process of specifying what we mean by a term. In deductive research, conceptualization helps translate portions of an abstract theory into testable hypotheses involving specific variables. In inductive research, conceptualization is an important part of the process used to make sense of related observations.
3. Constant: A number that has a fixed value in a given situation; a characteristic or value that does not change.
4. Operation: A procedure for identifying or indicating the value of cases on a variable.
5. Operationalization: The process of specifying the operations that will indicate the value of cases on a variable.
6. Content analysis: A research method for systematically analyzing and making inferences from text.
7. Closed-ended (fixed-choice) question: A survey question that provides preformatted response choices for the respondent to circle or check.
8. Mutually exclusive: A variable’s attributes (or values) are mutually exclusive when every case can be classified as having only one attribute (or value).
9. Exhaustive: Every case can be classified as having at least one attribute (or value) for the variable.
10. Open-ended question: A survey question to which respondents reply in their own words, either by writing or by talking.
11. Index: A composite measure based on summing, averaging, or otherwise combining the responses to multiple questions that are intended to measure the same concept.
12. Scale: A composite measure based on combining the responses to multiple questions pertaining to a common concept after these questions are differentially weighted, such that questions judged on some basis to be more important for the underlying concept contribute more to the composite score.
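The index/scale distinction above comes down to whether items are weighted equally or differentially. A minimal sketch, using invented Likert-style responses and hypothetical weights (neither is from the textbook):

```python
# Invented item responses (1-5 Likert scale) from one respondent,
# all intended to measure the same underlying concept.
responses = [4, 5, 3, 2]

# Index: unweighted combination -- every item counts equally.
index_score = sum(responses)

# Scale: differentially weighted combination -- items judged more
# central to the concept (hypothetical weights here) count more.
weights = [2.0, 1.5, 1.0, 0.5]
scale_score = sum(w * r for w, r in zip(weights, responses))
```

The index here is simply 4 + 5 + 3 + 2, while the scale multiplies each response by its weight first, so the first two (more heavily weighted) items dominate the composite score.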
13. Triangulation: The use of multiple methods to study one research question.
14. Level of measurement: The mathematical precision with which the values of a variable can be expressed. The nominal level of measurement, which is qualitative, has no mathematical interpretation; the quantitative levels of measurement (ordinal, interval, and ratio) are progressively more precise mathematically.
15. Nominal level of measurement: Variables whose values have no mathematical interpretation; they vary in kind or quality but not in amount.
16. Ordinal level of measurement: A measurement of a variable in which the numbers indicating a variable’s values specify only the order of the cases, permitting greater than and less than distinctions.
17. Interval level of measurement: A measurement of a variable in which the numbers indicating a variable’s values represent fixed measurement units but have no absolute, or fixed, zero point.
18. Ratio level of measurement: A measurement of a variable in which the numbers indicating the variable’s values represent fixed measuring units and an absolute zero point.
19. Face validity: The type of validity that exists when an inspection of items used to measure a concept suggests that they are appropriate “on their face.”
20. Criterion validity: The type of validity that is established by comparing the scores obtained on the measure being validated to those obtained with a more direct or already validated measure of the same phenomenon (the criterion).
21. Construct validity: The type of validity that is established by showing that a measure is related to other measures as specified in a theory.
22. Reliability: A measurement procedure yields consistent scores when the phenomenon being measured is not changing.
23. Test-retest reliability: A measurement showing that measures of a phenomenon at two points in time are highly correlated, if the phenomenon has not changed or the measure has changed only as much as the phenomenon itself.
24. Interitem reliability (internal consistency): An approach that calculates reliability based on the correlation between multiple items used to measure a single concept.
25. Alternate-forms reliability: A procedure for testing the reliability of responses to survey questions in which subjects’ answers are compared after the subjects have been asked slightly different versions of the questions or when randomly selected halves of the sample have been administered slightly different versions of the questions.
26. Split-halves reliability: Reliability achieved when responses to the same questions by two randomly selected halves of a sample are about the same.
27. Interobserver reliability: When similar measurements are obtained by different observers rating the same persons, events, or places.
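Interitem reliability (term 24) is commonly reported as Cronbach’s alpha, which compares the sum of the individual item variances with the variance of the summed scores. The sketch below uses invented item scores and NumPy (both are my choices for illustration, not the textbook’s); items that move together perfectly yield an alpha of 1.0.

```python
# A minimal sketch of interitem reliability via Cronbach's alpha,
# with invented scores; values are illustrative only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = items measuring one concept."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three items that rise in lockstep across five respondents.
scores = np.array([[1, 1, 2],
                   [2, 2, 3],
                   [3, 3, 4],
                   [4, 4, 5],
                   [5, 5, 6]])
alpha = cronbach_alpha(scores)
```

Because the three columns are perfectly correlated, alpha comes out at 1.0; uncorrelated items would push it toward 0, signaling that the items do not measure a single concept consistently.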
Chambliss, D. F., & Schutt, R. K. (2016). Making Sense of the Social World: Methods of Investigation (5th ed.). Thousand Oaks, California, United States of America: SAGE.