BookFusion is an eBook platform that aims to redefine the reading experience. It allows users to read, share and have all their eBooks available across all their devices. Readers can discover new books by accessing the global library of other readers, with the ability to easily borrow and loan books to friends and family. With over 30,000 free eBooks and a tightly integrated social experience, readers can find new books through their social network or by simply browsing the store. BookFusion is available on the Web and on Android tablets. At the end of this analysis, I intend to provide my client with useful, tested feedback that, when implemented, will improve the overall usability and acceptance of the application. The goal of this usability analysis is to provide the reader with in-depth knowledge of the different methods used to test and evaluate the system, the data gathered, the analysis of that data and the recommendations suggested to improve the system.
A competitive analysis was conducted to establish what makes BookFusion unique, to compare it with other similar products and to determine what works well, and what doesn’t, between the systems. Software similar to BookFusion includes:
This Amazon-based application allows thousands of books to be read on a personal computer in color, with no Kindle device required, for e-books purchased from Amazon’s store.
There are several similarities in the design of the two applications, including the display of books on the home page, the ability to adjust brightness and layout, and the layout of the menu bar. However, the Kindle provides a cleaner display when compared to my client’s. The ability to organize books into personal groups is also functionality that could help users who own a large number of books.
The Nook is an e-book reader created by Barnes & Noble which can be used to access Nook reading material.
Once again, we see the similarity between the front pages of the e-book readers. However, one distinct difference in the Nook is the titles underneath the books, which serve as a secondary means of identification for users.
There are even more eBook readers available, including Scribd, Oyster, Bluefire, Moon+Reader, FBReader and Aldiko. One distinguishing feature that these systems have that was not seen in my client’s is the use of their logo on the header bar of the main page, which helps to reinforce the brand. Although each of these applications provides similar services, they are limited in the types of content they provide. BookFusion seeks to enable the reading and sharing of different types of eBooks across different devices.
Screenshots: Bluefire Reader and Moon+Reader
Suggested changes based on competitive analysis
1. Removal of whitespace behind books (Kindle and Nook)
2. Addition of title text underneath books (Nook)
3. Create custom groups for books (Kindle)
4. Ability to search for words in books (Kindle)
5. Adding logo to top bar (header bar) of home page (Kindle)
6. Enlarge items of interest (Oyster)
7. Percentage bar of completion underneath book (Moon+Reader)
8. Splitting taskbar between top bar (header) and bottom bar (footer) as seen in Bluefire
9. Allow orientation change while reading book. (This was noticed while using software)
The test administrator, Dhuel Fisher, conducted contextual inquiries, interviews and heuristic evaluations of the BookFusion application. Personas were also created to aid in usability testing (see attachment 6 for personas). The contextual inquiries and interviews were conducted with different users who used the system on their own mobile devices. The heuristic evaluations were conducted by Dhuel Fisher and Hwaju Chung, another student within the HCC 729 class. The sessions captured each participant’s navigational choices, task completion rates, comments, overall satisfaction ratings, questions and feedback.
The main goal of these contextual inquiries was to watch users interact with the system and gain a good understanding of which features were intuitive and easy to use and which were hidden or problematic. The contextual inquiries were conducted with two participants. My first user (User A) was a female student who lives in Florida. Her session was conducted over Skype on February 14, 2016 and lasted approximately 30 minutes. My second user (User B) was a 23-year-old female UMBC graduate student. Her contextual inquiry was done in the user’s apartment on February 15, 2016 and lasted roughly 30 minutes. Neither user had used the system before, but both were familiar with similar systems.
The contextual inquiry identified several problems:
– Users disliked books not having correct covers
– Back button always leads back to bookshelf
– Not all books can be enlarged with gestures, and the enlarge feature covers text when present
– Profile information did not update
– There was not a wide variety of books
– Commenting and bookmarking features were hidden to some users
– Text alignment was off with certain books
The contextual inquiries acted as an introductory study to gauge which aspects of the interface should be the focus. After gathering this information, further research was needed to address these problems and find solutions to them. To do so, interviews were conducted. The interviews were structured, and the topics focused mainly on usability and the users’ ability to complete the tasks that the client requested. Questions were closed-ended, with the opportunity for further explanation.
The interviews were conducted with the creator of the system and two test users. The interviews were conducted in person at the interviewer’s apartment on February 21st, 2016 and each lasted approximately 15 minutes.
Overall, the responses were that the application was fairly easy to use and received an 80% rating from the client, 83% rating from user A and a 94% rating from user B.
The main issues identified through the interviews were:
– Hard to understand notifications
– Difficult to understand filtering
– Not enough books due to limited licensing
– Difficult to see bookmark feature
Another usability test performed was a heuristic evaluation. A heuristic evaluation is a usability inspection method that helps to identify usability problems in a user interface (UI) design. It involves evaluators examining the interface and judging its compliance with recognized usability principles (the “heuristics”). These evaluations were conducted by two members of the HCC 729 – Human Centered Design class. The main purpose of the Usability Action Report is to gather data about the usability of the product or design by a particular group of users for a particular activity or task within a particular environment or context. The heuristic analysis was used to assess the extent of the system’s functionality, assess the effect of the interface on users and identify specific problems with the system.
The issues received through the heuristic evaluations were:
– The white spaces around book titles
– Hidden notifications
– Hard to understand book filtering
– Not enough books
After these tests were performed, both low- and medium-fidelity prototypes were created, tested and redesigned in an effort to create the best version of the system. The final focal points addressed by the prototypes were:
• The sort function
• The notification function
• The bookmarking functions
This document contains the participant feedback, prototypes, observations, heuristic analyses, results from tests and participatory designs and recommendations for improvements. A copy of the scenarios and questionnaires are included in the Attachments’ section.
The target users of this application are the teenage to adult population. Although the system is available to the public and may be used by a wider range of individuals, the largest group of users is expected to fall within this range. My client also intends for the system to be implemented on a large scale in Jamaican schools as part of a new technology-in-education program. The population that will be using the system is expected to have the following characteristics:
1. Intermediate or high literacy ability
2. Above 12 years of age and under 60 years
3. Basic to high understanding of technology
4. Has obtained at least a primary school level of education
5. Average to high level of proficiency in English
6. Owns or has access to a smartphone or tablet
7. Has internet service when purchasing books is necessary
8. Jamaican students (select group)
9. Are a part of a social network (Facebook or Twitter)
10. Interest in reading e-books or digital content.
Outside of the main target users, the system is useful for all individuals who wish to read and share e-books. This includes children, teenagers, adults and the elderly. The major tasks that users will want to perform are:
1. Purchasing books
2. Reading books
3. Making bookmarks
4. Tracking progress
5. Sharing books
Personas of possible users were also created to aid in the usability testing.
The system will be used for both learning and leisure. My client hopes that this system will be used in schools by both teachers and students as a way to facilitate learning, sharing and collaboration. The system will also be used widely outside of educational settings for leisure purposes. With this in mind, the redesign has to be both informative and intuitive so that all readers can quickly and easily interact with the system. The system should also be professional while maintaining an inviting presence. One useful feature could be the ability to customize the display to each user’s preference.
The main goal to be supported is bookmarking a specific page of a book. If users are able to do this, they will have a good understanding of most features in the system and where those features are located. Knowing how to bookmark is important because it allows users to resume reading from their bookmarked position. During one of the observations conducted, the user did not recognize the bookmarking icon and was unable to bookmark a page. The sub-goals are to teach the user how to purchase and start reading books, which are also integral parts of the system. My client’s main focus was on the sign-in/log-in process, bookmarking and the system’s UI. I chose bookmarking for the task analysis because it also covers other areas.
Hierarchical Task Analysis
A hierarchical task analysis is a hierarchical representation of the steps it takes to perform a task for which there is a goal and for which there is some lowest-level “action” or interaction. This analysis allows us to understand the processes necessary to complete specific tasks and analyze how multiple tasks may be connected. Within the hierarchical task analysis, we assume that the user already has an account (see attachment 7 for the diagrammatic HTA).
1. Log into application
a. Sign in using Facebook
b. Sign in using twitter
c. Sign in using BookFusion
2. Purchase a book
a. Click menu icon at top left of screen
b. Go to store.
c. Search for book.
d. Click book.
e. Click add to library.
3. Read book
a. Click menu button
b. Go to book shelf.
c. Click on book.
d. Click read.
4. Bookmark page
a. Click the bookmark icon on top right of page.
b. Click add
5. Return to Bookmarked page
a. Open bookmarked book
b. Click on book icon
c. Select bookmark
Plan 1 for task 1
A. If user uses Facebook do 1a
Else if user uses Twitter do 1b
Else use 1c
Plan 1 for task 2
A. Do 1
B. Do 2
Plan 1 for task 3
A. Do 1
B. If bookshelf is empty or you want a new book, do 2
C. Do 3
Plan 1 for task 4
A. Do 3 and 4
Plan 1 for task 5
A. Do 5
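The hierarchy and plans above can be expressed as a small data structure with the plan logic as a function. This is an illustrative sketch only, not part of the report’s method; the task and step names come from the analysis, and the conditional mirrors Plan 1 for task 3.

```python
# Sketch: the HTA as a nested structure. Keys are task numbers; values
# are (task name, ordered sub-steps), taken from the analysis above.
HTA = {
    1: ("Log into application",
        ["Sign in using Facebook", "Sign in using Twitter",
         "Sign in using BookFusion"]),
    2: ("Purchase a book",
        ["Click menu icon", "Go to store", "Search for book",
         "Click book", "Click add to library"]),
    3: ("Read book",
        ["Click menu button", "Go to book shelf", "Click on book",
         "Click read"]),
    4: ("Bookmark page",
        ["Click the bookmark icon", "Click add"]),
    5: ("Return to bookmarked page",
        ["Open bookmarked book", "Click on book icon", "Select bookmark"]),
}

def plan_read_book(bookshelf_empty: bool, want_new_book: bool) -> list[int]:
    """Plan 1 for task 3: log in, purchase a book if needed, then read."""
    steps = [1]
    if bookshelf_empty or want_new_book:
        steps.append(2)  # purchase only when no suitable book is on the shelf
    steps.append(3)
    return steps
```

Encoding the plans this way makes the conditional branching explicit, which is easy to lose in a purely textual plan listing.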
Session and Recruitment Details
Participants for both the contextual inquiry and the interviews were recruited through direct communication. The test administrator, Dhuel Fisher, e-mailed the attendees informing them of the test logistics and requesting their availability and participation. Participants responded with an appropriate date and time. Each contextual inquiry session lasted approximately thirty minutes and each interview lasted approximately fifteen minutes.
The test administrator introduced the users to this method by explaining how a contextual inquiry works and the different roles that each person would play. An environment of partnership was created by asking the user several questions while listening for indicators such as “yes…but” or “kind of”. Depending on these answers, follow-up questions would then be asked to ensure that the test administrator had a good understanding of what the users were trying to say or do. To create the right amount of focus, the test administrator asked the participant specific questions about aspects that the client was interested in, then allowed them to explore the system for themselves to see what they were interested in.
For the interview sessions, the test administrator asked the user to do specific tasks, then asked the user to rate the system on a one-to-ten scale. Post-task scenario subjective measures included (see attachment 1 for the full list of scenarios and post-scenario questions):
• How would you rate the sign in process?
• How would you rate the purchasing of books?
• How difficult was it to place a bookmark?
• How difficult was it to return to a bookmarked page?
• How difficult was it to resize text?
After the last task was completed, the test administrator asked the participant to rate the application on a one-to-ten scale on two subjective measures:
• Difficulty of maneuvering the system
• Overall graphical layout of the system
In addition, the test administrator asked the participants the following overall website questions:
• What observations did you make about the system that you think could use improvement?
• Would you recommend this system to a friend? Why?
See attachment 1 for subjective and overall questionnaires.
After conducting the interviews, it was realized that the ranking system may have resulted in unintentional biasing. To counteract this, several of the questions were changed and a Likert scale was used instead of the previous one to ten ranking. This is the proposed format for future studies (See attachment 2 for updated interview questions).
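If the earlier one-to-ten ratings ever need to be compared against responses gathered on the new Likert scale, the old scores can be binned onto five points. The sketch below shows one such mapping; the even binning is my assumption, not part of the study protocol.

```python
def to_likert(rating: int) -> int:
    """Map a 1-10 rating onto a 1-5 Likert point (assumed even binning:
    1-2 -> 1, 3-4 -> 2, ..., 9-10 -> 5)."""
    if not 1 <= rating <= 10:
        raise ValueError("rating must be between 1 and 10")
    return (rating + 1) // 2
```

Any monotonic binning would work; the important point is to state the mapping so that ratings from the two study phases are not compared on incompatible scales.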
To conduct the Usability Action Report, the two parties Dhuel Fisher and Hwaju Chung, evaluated the interface according to ten heuristics (See attachment 3 for Dhuel Fisher’s UAR forms and attachment 4 for Hwaju Chung’s UAR forms). After these Usability Action Reports were gathered, they were aggregated and then sorted into different groups based on ratings and relationship (see attachment 5 for categorization of Usability Action Reports).
After the above methods were conducted, low fidelity prototypes were created with the adjusted changes recommended by the previous test users. These prototypes were then tested using a think-aloud method. A participatory design session was also conducted afterwards to further improve upon the system (see attachment 8 for the low fidelity prototypes and attachment 9 for the think-aloud and participatory design solutions). UAR forms were also created to find any connections or similarities between the solutions (see attachment 10).
The three issues addressed in the prototypes were:
• The categories used to filter books
• The lack of book options
• False notifications
The low fidelity prototypes provided a lot of useful information but were unable to fully capture all the processes and features of the application. To better assess the application’s features, medium fidelity prototypes were created using the information gathered through the low fidelity prototypes (the prototype of the system can be found at https://invis.io/UA6VNO3TC) (see attachment 11 for the test protocol and test plan for the medium fidelity prototype). When creating the medium fidelity prototype, the client suggested that the focus be moved from the lack of book options to the bookmarking feature, due to the limited ability to obtain books and the fact that the bookmarking feature was the main initial concern. The main improvements made in the medium fidelity prototype were:
1. Added a sort button to the bookshelf – This addition to the prototype was added because one of the users in the think aloud session said that the menu icon to the right of the page was not clear or visible. She then suggested that a possible solution was to have a sort button which would be much more visible and easier to interact with.
2. Changed the color of the bookmarking icon – Many users were not able to see the Bookmark icon because it was close in color to the background. My solution was to make it yellow which would draw the user’s attention to it.
3. Removed the notification tab that did not have any notifications – This problem occurred each time the user logged into the system. A notification would appear but when entered, would be blank. To solve this problem, the notification tab was removed. One other possible solution would be to provide a welcome message with a coupon for the first book the user purchases.
I chose to make these changes because they were the most troubling to the users, and by fixing these, it would greatly improve the system. I also decided to add other changes to the prototypes based on other ideas received in the participatory design sessions such as:
1. Changing the sort categories – The sort categories in previous prototypes were difficult to understand according to both participants and a new set of categories were chosen.
2. Allowing swiping of books to sort them – One participant said that they would like to be able to sort their book in their own way and suggested the ability to swipe the books to different locations.
3. Replacing notification tab with individual group notifications – Since the removal of the notification tabs was not replaced with the welcome message, we chose to place notifications in other tabs which would reduce the number of tabs and also help the user to see what category each notification belonged to.
Finally, in an effort to gather more data and support my final conclusion, two users were asked to participate in an eye tracking study which looked at the time taken to recognize the bookmark feature and the focus that the original layout received versus the prototyped version (See attachment 11 for eye tracker instructions).
For the contextual inquiry, two participants, both of whom were female, were scheduled on February 14th and February 15th. These participants were contacted through the test administrator’s contacts. For the interviews, two participants, both of whom were female, were scheduled on February 21st, 2016. Participants were chosen based on availability and proximity to the test administrator. Participant A is a current university student and participant B is a lawyer. Participants in the low fidelity, participatory design and medium fidelity studies were chosen from within UMBC’s ITE building. The two participants chosen for the eye tracker study were two HCC Master’s students from UMBC who were not familiar with the system.
Within the interviews, test participants attempted completion of the following tasks (see attachment 2 for complete test scenarios/tasks):
• Create an account on the application
• Purchase a book of their choice
• Read the book and place a bookmark at desired page
• Exit book and return to bookmarked page
• Resize the text
Within the low fidelity prototype, the users were asked to
• Sort the bookshelf
• Select a specific notification
• Purchase a book
Within the medium fidelity prototype, the users were asked to
• Sort the bookshelf
• Select a specific notification
• Bookmark a page in a book
Within the eye tracker study, the user was asked to
• look for the bookmark feature
Contextual inquiries and interviews
Task Completion Success Rate
Both participants successfully completed task 1 (create an account on application). One out of the two participants had trouble completing the second task (purchase a book of their choice). Both participants were able to complete task 3 (Read the book and place a bookmark at desired page). One out of the two participants had trouble completing task 4 (Exit book and return to bookmarked page).
One out of the two participants had trouble completing task 5 (Resize the text).
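The outcomes described above can be tallied into per-task completion rates. This is a minimal sketch; the True/False outcomes follow the narrative, and treating “had trouble” as an unsuccessful first attempt is my assumption.

```python
# Sketch: per-task completion rates for the two participants.
# Outcomes mirror the text above; "had trouble" is counted as a
# failed first attempt (an assumption for illustration).
outcomes = {
    "Task 1": [True, True],    # create an account
    "Task 2": [True, False],   # purchase a book
    "Task 3": [True, True],    # read and bookmark
    "Task 4": [True, False],   # return to bookmarked page
    "Task 5": [True, False],   # resize the text
}

def completion_rate(results: list[bool]) -> float:
    """Percentage of participants who completed the task."""
    return 100.0 * sum(results) / len(results)

rates = {task: completion_rate(r) for task, r in outcomes.items()}
```

With only two participants the rates are coarse (0, 50 or 100 percent), so they are best read alongside the qualitative observations rather than on their own.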
Task Completion Rates
| Participant | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |
After the completion of each task, participants rated the ease or difficulty of completing the task. The 10-point rating scale ranged from 1 to 10 (see attachment 1).
| Participant | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |
The following comments capture what the participants liked most:
• Ease of signing in and the easy to read font
The following comments capture what the participants liked the least:
• Limited number of books
• Difficult to leave comments
• Profile information does not update
• Hidden bookmarking
• Improper text alignment of some books
Recommendations for Improvement based on preliminary findings
• Provide tutorial at first login to show user where features are
• Increase available books
• Allow zooming gesture for all books
• Fix notification alert
• Fix book cover page display
• Fix text alignment in certain books
All participants were able to complete all tasks in the low fidelity prototype and medium fidelity prototype.
The following data were received from the interviews and time recordings conducted during the medium fidelity sessions. User A and User B used the original version of the interface, while users C and D used the prototyped version.
| Participant | Pre-interview question | Pre-interview question | Pre-interview question |
| Participant | Task 1 completion time |
As seen in the table above, there is a significant difference between the users who used the previous version of the system and those who used the prototyped version. A summary and explanation of these results can be found in attachment 13.
Within the eye tracker test, participant A was shown the original layout of the application and participant B was shown the prototyped version. It is clear from the results that there is a large difference in the noticeability of the bookmarking icon when its color is changed to yellow. More data and precise timing from the eye tracker will be added along with this analysis. Two different individuals were used across the two studies in an effort to minimize bias: because the location of the bookmark feature is the same in both versions and the only change is the color of the icon, a participant who had already seen one version would be more likely to look in the general area of the first bookmark icon, making it more likely for them to notice it.
| Time taken to recognize bookmark (sec) |
As can be seen from the above data, there is a significant difference in the time it takes for the user to recognize the yellow bookmark feature.
The recommendations section provides recommended changes and justifications driven by participant success rates, behaviors, and comments. Each recommendation includes a severity rating. These ratings, varying between low, medium and high, were assigned based on the impact that the problem may have on users.
Low severity – Although the problem exists, the user may not notice or will not be severely bothered by the problem.
Medium severity – The user may be aware of the problem but the overall functionality or usability of the system is not compromised.
High severity – There is a usability or functionality problem that must be addressed as soon as possible.
The following recommendations will improve the overall ease of use and address the areas where participants experienced problems or found the interface/information architecture unclear.
The following recommendations are based on the results from the interviews, prototypes and eye tracking tests conducted. They are ordered based on increasing severity.
False notifications
Whenever the system is started, an empty notification appears at the side.
Recommendations:
· Remove the false notification
· Replace it with a coupon or welcome message
· Group notifications by their respective categories
Justification:
1. Based on the initial interviews, this notification does not cause severe problems in the overall use of the system, but it is a flaw with the potential to annoy users.
2. After creating several prototyped solutions to this problem, it was noticed that changing it would slightly improve the system.
Profile information does not update
The user used the system for several minutes but their reading time in their profile did not update.
Recommendation: Improve the updating of profile information
Justification: Although this information does update, it does not do so in real time, which may bother users.
Severity: Low
Allow enlarging gesture
Not all books can be enlarged through the enlarging gesture; some require the user to go into settings to resize the text.
Recommendation: Allow all books to be enlarged through the enlarging gesture
Justification: Many users are accustomed to the enlarging gesture from their other devices; consistency across all platforms will increase usability.
Severity: Low
Improper text alignment of some books
During contextual inquiry, the user was upset that some books only took up half the page.
Recommendation: Ensure that all books occupy the entire section of the screen
Justification: Although this is not present in all books, it is still a serious issue when it occurs.
Severity: Medium
White space around books
There is a white space that appears around book covers and varies between books.
Recommendation: Remove the white space around books or ensure uniformity of the white space
Justification: Several users were distracted by the differences in white space around the books; this visual effect may dissuade users from using the system.
Severity: Medium
Sort books by Tag
None of the users that interacted with this feature understood how it worked.
Recommendations:
· Remove the current ways in which books are sorted
· Add sort by name, author, genre, date purchased and custom
Justification:
1. The sort categories and phrasings were difficult for all users to understand.
2. As shown in the prototypes, renaming the sorting tools allowed users to better understand them and made them more likely to be used.
Show where everything is
One user said that the system was easy to use, but they did not know where all the functions were, or that some of them existed.
Recommendation: Provide a short tutorial at the beginning of the system
Justification: This will allow all users to know what each function does and where to locate it.
Severity: High
Change color of bookmarking feature
During the interviews, prototypes and eye tracker studies, the users were asked to bookmark a book.
Recommendation: Change the color of the bookmark feature from grey to yellow
Justification:
1. Many participants were unable to recognize the bookmark feature because of its color.
2. During the eye tracker study, the color of the bookmark feature was changed to yellow and there was a large increase in visibility.
Purchase book (Task 2)
Task 2 required participants to find a book of their choice and purchase it.
Recommendation: Add more books to the library
Justification: Participants from all studies searched for books within the store and were unable to find their desired books; providing more content will greatly improve usability.
Severity: High
After conducting the original set of interviews, a large number of issues were recognized within the system. These issues were then prioritized by importance and by the probability of the user encountering them. Through the low fidelity prototyping and the participatory design, I was able to get feedback directly from users while working with them to improve the system by making changes that they would understand and accept. These ideas were then tested using the medium fidelity prototyping accompanied by eye tracker data. Most of the suggested changes produced positive results and, if implemented, would increase the overall usability of the application. All these collective sources of information influenced the final set of recommendations. The overall portrayal of the system is that it is simple and easy to use. An application that allows users to read, share and have all their eBooks available across all devices will be a great asset to users. By implementing the recommendations and continuing to work with users, this application has the potential to reach a wide-ranging audience.