Projects and Presentations
Usability Project: iPhone 8 vs. Samsung Galaxy S8
• Each team will perform an experiment containing a usability test for two products. Your experiment will be guided by hypotheses identifying expected differences in performance on usability goal(s) (e.g., efficiency) and user-experience goal(s) (e.g., satisfaction). Remember, you are measuring usability. So, don’t use tasks like how long it takes for a device to come on or go off.
• You must have one objectively measured usability goal (e.g., time, measuring efficiency). In addition, I’d like to know your usability goal requirement level for each task and the rationale for it. You must specify a single usability goal requirement for each task. Remember, you’re simulating the process of establishing requirements. So, your usability goal requirements should represent “requirements,” not the average scores on your pilot test.
• I’d also like to know your user-experience goal requirement(s). For example, this may be an overall mean satisfaction score significantly better than the neutral point on a 5-point scale or, perhaps, that at least half of your participants say that, overall, they had a positive experience using the product. In addition, if you’d like, you may have specific user-experience goal requirements for each task or different types of questions you ask your participants.
• If you want, you can turn your two-group (products) experiment into a factorial design (two or more independent variables) by defining two different groups of participants (e.g., on the basis of age, gender, computing experience or some other independent variable dealing with people’s characteristics). You do not have to do this; it’s up to you.
• “Task” is actually an independent variable. The number of tasks defines the number of conditions (or levels) on the “task” independent variable. So, you actually have a factorial design. You do not have to worry about tasks as an independent variable or about factorial designs unless you choose to do so.
• Remember, we’re expecting you to implement good experimenter control over persons, procedures, and measurement when you perform your experiment, and to demonstrate that you have done so when presenting your project.
• 10 participants are enough; that is, 5 participants per team member for a two-person team. Each team member should be involved in testing each device to remove bias. More participants are better for experiments because I’m asking you to perform statistical tests for hypotheses. However, you do not have to have more than 20 participants. I realize this is just a class project and you have other classes.
• Have your participants voluntarily sign a short informed consent form before they participate in your project. Make sure to tell your participants: (1) the general goal of your project; (2) what they’ll be doing; (3) that there are no risks; (4) that their responses will be kept private; and (5) that they can stop participating at any time without giving any reason. Your participants are doing you a favor, so please treat them accordingly. [Note: I do not want you conducting any experiment that has any possible physical or emotional risks to your participants.]
• To ensure good control, perform a pilot study with 2 to 4 participants, depending on whether you’re using a within-subject or between-subject design, respectively.
• Basic structure of your presentation:
o Introduction
o Overall Usability and User Experience Hypotheses (Null and Alternative) and Rationale for them
o Tasks (and rationale for them)
o Usability Goal Requirements (for each task and overall)
o User Experience Goal Requirements (could be for each task or a set of questions or just one overall user experience question)
o Method (how you did your experiment)
Experimental Design
Participants
Procedures
Pilot Test Results and Changes to Procedures
o Results (what you found out based on your statistical tests) for
Usability goal requirements and hypothesis
User experience goal requirements and hypothesis
o Discussion (general conclusion, reasons for your results, concerns, limitations, and proposed next steps, including possible suggested changes to the products)
o Appendix
One copy of Questionnaire
One copy of Informed Consent Form
Data (all of it, but not the names of your participants)
Printouts for All Statistical Tests
• Questions will not be asked until a team finishes its presentation. The only exception is that I can ask questions if I can’t understand the presentation. Also, I get the opportunity to ask the first question, if I want to do so.
• At least one team member needs to upload your presentation into the Blackboard assignment folder containing the date for your presentation. Failure to do so will result in both team members losing 2 points. I expect you to use flash drives or web access (e.g., your Blackboard access, not mine) to make your presentations.
• All students must present part of their team’s presentation to the class. And, please, practice your talks. You may lose points if you’re so confused that you don’t know your part of the presentation.
• In addition, your team must give me a paper copy of your presentation when you present it, with no more than two (2) slides per page.
o Provide enough information on your slides for me to remember what you did when I grade the projects in a week (or more) after you give your presentation.
o But don’t present so much information that I can’t read your slides when you make your presentation. Feel free to use the Notes Page in PowerPoint when you give me the paper copy of your presentation, but you may have to print only one slide and notes page per page.
• Remember to make sure that I can read your slides, and particularly your graphs – especially if you give me a black-and-white copy of your presentation. I’ll be grading from your paper copy and only going to your digital copy if I need to do so.
• I only need to see mean values (and possibly standard deviations) and the results of your statistical tests when you make your presentation.
• Remember, the statistical tests depend on your:
o hypotheses,
o design (e.g., between- or within-subject for each of the independent variables in your experiment), and the
o type of dependent measure (e.g., binary or continuous).
• You’ll be using the results of your statistical tests to make your conclusions.
o The stat tests will help you decide whether your data supports your hypotheses or not. The tests will not prove or disprove your hypotheses.
o Remember there are both Type I and Type II errors. There also is a difference between statistical and practical significance. These issues should help you decide your alpha level (e.g., p < 0.05) for deciding whether or not the data support rejection of your null hypothesis.
• Figure out how you are going to analyze your data, including your questionnaire data, before collecting it.
o You only need to test your hypotheses statistically, not each goal requirement for each product.
o However, you must note whether or not your goal requirements were met, on average, by both products for each task. You may want to perform a statistical test if the mean performance for one or both products fails to achieve a goal requirement. (You also should note how many participants failed to reach the goal requirement level for each of your tasks.)
o Remember to perform statistical tests for your subjective data (questionnaires) too!
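For checking mean performance against a goal requirement, the single-sample t statistic is easy to compute in any package. Here is a minimal Python sketch using only the standard library; the task times and the 30-second goal below are hypothetical illustration data, not from any real test. Look up the resulting t and df in a t table to get the p-value:

```python
import math
from statistics import mean, stdev

def one_sample_t(data, goal):
    """t statistic (and degrees of freedom) for testing whether
    the sample mean differs from a goal-requirement value."""
    n = len(data)
    se = stdev(data) / math.sqrt(n)          # standard error of the mean
    return (mean(data) - goal) / se, n - 1   # t statistic, df

# Hypothetical task times (seconds) against a 30-second goal requirement
times = [32, 35, 29, 38, 31]
t, df = one_sample_t(times, 30)
# Compare |t| to the two-tailed critical value from a t table
# (for df = 4 and alpha = 0.05, the critical value is about 2.776).
```

With these made-up numbers, t ≈ 1.90 with df = 4, which falls short of the 2.776 cutoff, so the mean time would not differ significantly from the goal.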
• You can use the statistical package of your choice. Here are some possibilities:
o Excel
For example, for a t-test to test “time” data for two products
• TTEST is under “Formulas, More Functions, Statistical” on the Excel menu bar
• Use “paired” for a within-subject design and “two sample” for a between-subject, “two group” design
• If you have a “two sample” design, first perform an F-test to determine whether your two samples have equal or unequal variance (e.g., p < 0.05). [Alternatively, you can assume unequal variances and do a Welch’s t-test instead of a Student’s t-test. However, Welch’s t-test is not in Excel or VassarStats below.]
Correlations (“CORREL”) and many other statistical calculations also can be found under “Formulas, More Functions, Statistical.” For example, you may want to determine if there is a significant correlation (i.e., relationship) between your usability and user-experience data.
I also have posted a set of screen shots that a student sent to me for enabling the Excel Data Solver, under Data on the menu bar, for doing the statistical analysis. It has the advantage of providing printouts for your statistical analyses.
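Since Welch’s t-test mentioned above is not built into Excel or VassarStats, here is a minimal Python sketch of computing its statistic and Welch–Satterthwaite degrees of freedom with the standard library; the two groups of task times are made-up illustration data. The resulting t and df can then be looked up in a t table or any stats package:

```python
import math
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic and Welch–Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    n1, n2 = len(x), len(y)
    v1 = variance(x) / n1          # squared standard error, group 1
    v2 = variance(y) / n2          # squared standard error, group 2
    t = (mean(x) - mean(y)) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Hypothetical task times (seconds) for two between-subject groups
t, df = welch_t([10, 12, 14], [20, 22, 24, 26])
```

Note that Welch’s df is usually fractional; that is expected, not an error.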
o VassarStats: Statistical Computational Website (Google it)
For doing all the tests mentioned for Excel. In addition:
For testing proportions (e.g., of participants who made an error performing a task)
• against goal requirements (e.g., under “Frequency Data,” then “Binomial Probabilities”) or
• for two products (under “Proportions,” then “Significance of Difference Between Two Independent Proportions” for a between-subject design [or “correlated” for a within-subject design])
Single-Sample t-test for testing data for continuous variables (e.g., mean time) against a goal requirement
Analysis of Variance (ANOVA) for a simple factorial design (e.g., products by tasks) with continuous data
Categorical data tables for factorial design with proportions, including a correction if your expected cell frequencies are too small (under “Frequency Data”).
Note: It’s definitely okay if you just do a series of t-tests or proportional tests if you’re not familiar with tests for factorial designs. I do not want the statistics to get in the way of you doing your project. However, remember that a factorial design (e.g., ANOVA) tests for interactions and better controls for Type I errors.
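The “Binomial Probabilities” test for proportions mentioned above can also be computed exactly in a few lines of Python with the standard library. The counts here are hypothetical; the function sums the exact upper-tail binomial probability:

```python
from math import comb

def binom_tail(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 9 of 10 participants completed a task without error.
# How likely is a result at least this extreme if the true success rate
# were only the null value of 0.5?
p_value = binom_tail(9, 10, 0.5)   # 11/1024, about 0.011
```

Since 0.011 < 0.05, you would reject the null hypothesis that the true success rate is 0.5 in this made-up example.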
o Note if you use Minitab: To determine whether negative t-values have a significance level (alpha) less than or equal to 0.05 (i.e., p < 0.05) for a two-tailed test in Minitab, use a cumulative probability greater than 0.975 (i.e., 1 – 0.975 = 0.025 in each tail) as your cutoff.
Grading: Project Errors for Which I Will Take Off Points
• Failing to pay attention to comments made in this document or in the yellow notes on the example project
• Confusing null and alternative hypotheses
• Not telling me your user requirements
• Regarding your tasks
o Having a trivial set of tasks
o Not providing the rationale for them
• Regarding your usability goal requirements and your user-experience goal requirements
o Not having any
o Not indicating whether the requirements are met
On average, and
How many participants failed to meet the requirement for each task with each device (and, if appropriate, for each question for user experience)
• Regarding your method
o Failing to make it clear that you used good experimenter control regarding participants, procedures, and measurement
o Failing to randomly assign participants to conditions for a between-subject design or use randomized counterbalancing for a within-subject design
• Regarding your pilot study
o Not doing one
o Only doing one to obtain usability goal requirements instead of doing it to find and fix problems with your testing procedures
o Not telling me what changes you made to your testing procedures
• Regarding the presentation of your results
o Not indicating your task or question names when presenting your results
o Not indicating which product performed significantly better based on statistical tests (e.g., p < 0.05)
• Regarding your statistical tests
o Not having any
o Doing them wrong: for example
You can’t have p > 1.0
Doing a “two-sample” t-test on Excel when you have a within-subject design or “paired” t-test when you have a between-subject design
o Not having your statistical printouts in the appendix so that I can make sure that you did your tests correctly
o Not ensuring that your slides correctly present the information in your printouts
• Regarding your conclusions
o Not using both your statistical analysis and your requirements analysis to make your conclusions
o Making the wrong conclusions (e.g., saying that you could not reject the null hypothesis when you could or vice versa)
• Not presenting enough information (or thinking critically) when you present your discussion/limitations
• Other things:
o Giving a poor presentation
o More than 2 slides/page
o Failing to upload your presentation