Category Archives: Reading Reflection

A Tale of Two Worlds – Reflections on Distraction in the Age of Internet

“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way.” – Charles Dickens

I never thought Dickens’ classic portrait of the age of the French Revolution could be so apt for the age we are living in: we have everything at our fingertips, yet almost nothing really stays solid. We believe in and embrace the power of technology wholeheartedly, yet we suspect we are being disarmed by it. We are living in two worlds at the same time: one of ultra-connectivity where we can access everything easily, and one of scattered attention and an inability to get hold of what is in front of us, and thus full of anxiety and loneliness.

I did some good but long readings on Internet use and distraction (great picks from Dr. V; see the reading list below). I read about how people get used to hopping from one link to another, facilitated by the deeply linked Internet, and are no longer able to read deeply and with concentration. It is not uncommon for someone to end up browsing the research areas of the HCI program at CMU when she initially set out to start an HCI course on Coursera with Dr. Scott Klemmer, with multiple browser tabs displaying the trail she had just followed. Too bad the limited time she had for the course was up, and she had to close all the tabs without watching a single lecture. Then again, it was not all bad: at least she didn’t end up reading fresh gossip about Hollywood celebrities. This kind of easy information hunting might make us “pancake people” who “spread wide and thin,” as vividly illustrated by Nicholas Carr.

As these readings are themselves relatively long, they were good practice and a chance for reflection. I was startled by the uneasiness I felt when I realized I needed to read such long articles. I had to try really hard to keep myself from checking Facebook and Twitter from time to time, from wondering whether the world was still running and whether I had missed something important while reading. I have always known there is a problem with people’s attention nowadays, but these readings, and careful scrutiny of myself while reading them, really forced me to think. What is going on? Who is to blame? How do we deal with it?

After trying to comb through the readings and some other things we learned about Web 2.0, I came up with this map:

Technology and Distraction


It is of course oversimplified, but it helped me pinpoint the key joints where solutions might be created to alleviate the issues. In my map, I am an optimist about technology. It is technology that expands our vision and capabilities, because ultimately we desire to be powerful, and probably not many of us would be willing to live an Amish way of life. While it is inevitable that the way we think and the way we consume information will be rewired and altered by the capabilities of technology, the responsibility for making the best use of technology still rests in our hands. It is information, rather than knowledge or wisdom, that technology brings us; the training needed to turn information into knowledge still resides with us. Education is needed to train people to use the Internet in a smart way, harnessing the benefits and minimizing the harm. Design is called upon to help people regain the ability to focus.

A therapy is truly needed when many of us are incapable of deep learning and thinking, and of paying attention to the people around us, because of endless distractions and chaos. Whether one chooses to dive deep through a narrow lens or spread one’s time across multiple subjects, the ability to stay focused and generate one’s own thoughts is vital. For UX people, inattention is damaging in a special way: it may lead to a further inability to empathize, and empathy is the bedrock quality of a good UX researcher/designer.

The first step I am going to try is what Jackson called “effortful control”: reading without access to my mobile phone, reading on paper or full-screen on the computer, and taking notes while reading to facilitate focus and later reflection. These were the ordinary ways I read before; ironically, I am picking them up again because there might be something that high technology is incapable of offering naturally.

Hopefully, we shall see the “renaissance of attention” coming in the near future, and the Tale of Two Worlds will be only a tale to be told.

——————————————–

Here is the reading list (we read only small parts of the books):

Is Google Making Us Stupid? by Nicholas Carr

Distracted by Maggie Jackson

The Laptop and the Lecture: The Effects of Multitasking in Learning Environments by Helene Hembrooke and Geri Gay

Cognitive Control in Media Multitaskers by Eyal Ophir, Clifford Nass, and Anthony D. Wagner

The Distraction Addiction by Alex S Pang

Functional Specs 101

I am taking a course on production pipelines and project management. Knowledge of project management is a great addition to the skill set of UX researchers/designers, who will serve on a product team and work closely with other functions such as UI designers and developers.

As the UX lead on the course project, I learned quite a lot about a streamlined product development cycle. As a whole team, we worked through our confusion about the procedures for creating Functional Specs and about how to integrate other UX research steps, such as user research and wireframes, into the writing of Functional Specs.

Here, I am going to share my understanding and learning notes, based on several readings and class practice, on Functional Specifications (Functional Specs) basics and their role in project execution. I will do so by asking the following 3 questions:

What are Functional Specs?

In a nutshell, Functional Specs are documents that specify the “what” and “how” of a product: What is the product set out to do? What does the product look like? How do users interact with the product? What technologies achieve the functions? The specs should include the purpose, look, and behavior of the application.

Why do we need Functional Specs?

  • Functional Specs serve as roadmaps for product development. By nature, they provide the developers with both the landscape and all the necessary details of a product that meets the stakeholders’ requirements. Thus, I view Functional Specs as the bridge between frontline clients/users and backstage developers, groups that are vital to product design but might not have a chance to communicate directly with each other.

Functional Specs as a Bridge

  • Functional Specs help streamline the product development process. Through writing Functional Specs, the product team can gradually move from a chaotic information-gathering stage (with ambiguous and divergent understandings/inputs) to an agreed vision of the product.

However, this happy ending needs to be built upon quality procedures to create the Functional Specs.

How to write Functional Specs?

  • Do the research to define the product.

This stage is the one most UX researchers are familiar with. At the early stage of product development, research is needed to generate a clearer definition of the product. I view this early-stage research as a bi-directional system: a top-down research approach and a bottom-up research approach.

By the top-down approach, I mean conducting a comprehensive analysis of similar precedent products to (1) avoid reinventing the wheel, and (2) gain a quick understanding of the product through a shortcut.

The bottom-up approach, on the other hand, requires more time and effort: learning about the target users and communicating with clients to define the product. User-centered design methods such as ethnographic studies, contextual inquiry, interviews & focus groups, and task analysis can be used to this end.

  • Create the designer model / represented model.

After gathering and analyzing all the data, models can be used to represent the research findings. As Alan Cooper points out in About Face 3 (a must-read for UX designers/researchers), there are 3 models of a product: the users’ mental model, the represented model (or designer model), and the implemented model (or programmer model).

The users’ mental model represents what users see and understand when facing a product: a perceived product. It can be studied and represented with personas and persona-based scenarios.

The implemented model, or programmer model, is the actual mechanism that runs the product, mostly understood only by programmers.

The huge gap between the users’ mental model and the implemented model is bridged by the represented model (designer model). It is through this represented/designer model that users interact with the product. Thus, Cooper points out that good design makes the represented model as close to the users’ mental model as possible.

Mental Model, Represented Model, and Implemented Model by Alan Cooper


Since the represented model is the exact layer we are designing, it becomes the focus of Functional Specs writing. Meanwhile, its bridging nature also means that whatever we design on this layer, we should always keep the users’ mental model and the technical limitations in mind and keep the communication channels to stakeholders and programmers open.

  • Design the information architecture.

So, when we focus on the represented model, what exactly should we consider? The first and foremost step, as we learned in sketching, is the architecture of the application. Try thinking about these questions: What are the key pages users need to visit? What is the function of each page? What elements and content should go on each page? After answering these questions, we will be able to establish the structure and flow of the information.

At this stage, flowcharts, interactive prototypes, and wireframes can be very helpful for organizing thoughts, representing results, and provoking discussion. Wireframes are extremely helpful for gathering stakeholders’ feedback on the functionality and architecture of the product because they strip out distracting design elements entirely. The core objective at this stage is to get the represented model onto paper, in the form of a flow of key pages and a navigation design.

  • Design documents

Design documents can be viewed as pre-Functional-Specs documents. We put together everything we have and document all the feedback. On this basis, more detailed Functional Specs can be built. A lot of iteration happens at this stage, including redesigns of the information architecture, visual appearance, and detailed interactions. Through several iterations, the team should arrive at a clearer view of the product and reach a final consensus.

  • Functional Specs

Here comes the final step. We write up the Functional Specs to put everything we have been discussing and improving down on paper. This, again, makes sure our clients are aware of and have agreed on what we are going to build, and that the programmers have a “Bible” they can refer to. More technical requirements (say, which development technologies to use) might need to be discussed with programmer representatives.

So, in the end, what makes good Functional Specs? Check whether the Functional Specs have the following characteristics:

  • Blueprint. The Functional Specs should give an overview of what the product is about and who the target users are. This helps build consensus across the whole team. However, don’t include unnecessary research data (especially “raw data”) that would confuse and overwhelm programmers. A clear table of contents is also very important for facilitating a holistic understanding of the product’s scope.
  • Exhaustive details. Try to include every teeny-tiny interaction that will happen in the app. Accompany each explanation with corresponding screenshots. This can be harder than we think: there may be many more “what-if” situations than we fully considered, which can cause a lot of confusion and communication cost once the specs are handed to programmers. Practice and a good eye are needed.
  • Consistent and concise writing. Use the same design language throughout the document. Don’t use two terms to refer to the same element; for example, do not use “drop-down list” and “drop-down menu” interchangeably in the same document.

These learning notes are based on class materials and some further readings, especially the following two, which provide clear and in-depth explanations of Functional Specs, with great examples.

Functional Specs Tutorial by Allen Smith

Painless Functional Specifications by Joel Spolsky

Next time, I will briefly review the Agile development model and discuss how we adopt it in our projects.

Web 2.0: A Potluck Party

I am a foodie, so I see things a little bit differently.

Last week, we read O’Reilly’s article on the definition and characteristics of Web 2.0. Let’s briefly recap the 7 core features of Web 2.0 O’Reilly listed in this article:

  • Web as a platform
  • Harness collective intelligence
  • Data is the next Intel Inside
  • End of software release cycle
  • Lightweight programming
  • Multi-device
  • Rich UX (not necessarily “good” per se)

After reading, I had a strong feeling that definition is so important: I often refer to the Internet as “Web 2.0” with only a vague idea of what it really is and how it differs from its precursor (i.e., Web 1.0). Now I have a much better understanding and reliable criteria that help me judge whether a digital product belongs to Web 2.0. That’s the power of a good definition and principles. At the same time, I also felt the need for a good analogy, since it is easier to understand and remember relatively abstract ideas through one. So, as a foodie, I came up with this “potluck” analogy for Web 2.0.

Why a potluck? Because at a potluck, everyone can bring food to the party, being a food provider and a food consumer at the same time. You never need to wait for the big Gatsby party, with a pre-determined party time and food prepared by a famous host. A potluck is really a platform for each individual to present their food, which makes the party much more flexible, diverse, and lightweight. At the same time, you can tell the trends in cuisine, as well as the season, at a big potluck: peaches served in many dishes? It’s highly likely that summer is here. Almond flour frequently used instead of all-purpose flour? Gluten-free must be in vogue. Last but not least, you are exposed to a very rich range of flavors and choices (e.g., Chinese, Mediterranean, and South American dishes at the same time), but no one guarantees good taste across the big party.

It’s hard to say no to such a comprehensive and fun party. Big or small, you usually bring something to the party (it could be an appetizer, a main dish, or a dessert). However, it had better be a good piece of your work: once it is at the party, people will take a look at it and taste it, so be sure to prepare and offer something proper and nice, and don’t get your personal brand as a good cook smeared.

Reading Notes on Information Dashboard Design – Part 1

As we have extensive dashboard design brainstorming meetings going on these days, it is especially beneficial to read this insightful and well-written book by Stephen Few. I would like to share some takeaways from the first 3 chapters, which I have read.

The first 3 chapters offer general information on information dashboard design, with extensive examples, while the remaining 5 chapters provide further instruction on several important design issues.

The first thing discussed is a clarification of the idea of an information dashboard. After examining some existing dashboard products, Few came up with a definition of an information dashboard (p. 34):

A dashboard is a visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance.

Notice how this definition can be decomposed into 4 meaningful elements, each of which can enlighten us on many design considerations. From the definition, I can see at least the demands of understanding visual perception and of understanding users’ needs.

The bottom line here is: a dashboard is NOT a technology but rather a piece of design that aims to communicate, and “the limited real estate of a single screen requires concise communication” (p. 44).

Secondly, Stephen introduces different ways of categorizing dashboards. The one most relevant to visual design is categorization based on the role of the dashboard: a strategic role, an analytical role, or an operational role.

  • Strategic role (e.g., a CEO needs an overview of the company’s operational status): high-level measures / no real-time data / no interactions to support further analysis
  • Analytical role (e.g., our DIA2 product): demands greater context / interaction with the data / links seamlessly to other means of analyzing the data
  • Operational role (e.g., monitoring machine operation and taking action when necessary): dynamic in nature, real-time data / grabs attention when immediate action is needed

Our project clearly fits the “analytical role” best, which requires a good mechanism for providing more context to the data, and for enabling comparisons, extensive historical views, and interactions with the data to drill down.

Last, in Chapter 3, Stephen gave a list of 13 common mistakes in dashboard design:

  • Exceeding the boundaries of a single screen

No separate screens or scrolling; these ruin the benefit of monitoring information “at a glance”.

  • Supplying inadequate context for the data

Just as we discussed in our brainstorming meetings, the budget amount should be offered with other information; otherwise the number won’t mean anything to users.

The difficulty here is to show meaningful contexts without introducing distraction.

  • Displaying excessive detail or precision

E.g., displaying $98,978,407.78 when it should be $98,978,408, or simply $99M.
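Few’s rounding advice is easy to act on in code. Here is a minimal sketch (a hypothetical helper of my own, not from the book) of trimming a dollar figure to dashboard-friendly precision:

```python
def humanize_dollars(amount: float) -> str:
    """Trim a dollar figure to the precision a dashboard viewer actually needs."""
    if amount >= 1_000_000:
        # Collapse to whole millions for at-a-glance reading.
        return f"${amount / 1_000_000:.0f}M"
    # Otherwise, round to whole dollars with thousands separators.
    return f"${round(amount):,}"

print(humanize_dollars(98_978_407.78))  # the over-precise figure above becomes "$99M"
```

The exact thresholds and units would of course depend on what the dashboard’s users need to compare.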

  • Choosing a deficient measure

What should be shown, and in what unit? E.g., should users compare raw amounts, or should the percentage change be shown instead?

  • Choosing inappropriate display media

What type of chart or graph to use?

E.g., Stephen strongly opposes pie charts: it is hard to compare two-dimensional areas or angles.

  • Introducing meaningless variety

Always use the display that works best. Users won’t get bored because of this.

  • Using poorly designed display media

E.g., unrecognizable color differences, 3-D bar chart, and distractingly bright color.

  • Encoding quantitative data inaccurately

This introduces misinterpretation of the data.

  • Arranging the data poorly

With a large amount of data to show in limited space, it is important to place information according to its importance and the desired viewing sequence. This is why we discussed what information our persona Matt wants to see first.

Also, design and place information in a way of encouraging comparison.

  • Highlighting important data inefficiently or not at all

Don’t make everything visually prominent, or users won’t know where to look first.

  • Cluttering the display with useless decorations

E.g., background images, and other distracting ornamentations.

  • Misusing or overusing color

Color should not be used haphazardly.

Also, don’t rely purely on color to convey information: this excludes color-blind users (10% of males and 1% of females).

  • Unattractive visual display

Simple but hard to achieve: don’t make it ugly.

RAA: A recent study on credibility of tweets

RAA stands for: Research Article Analysis

Paper discussed:

Morris, M. R., Counts, S., Roseway, A., Hoff, A., & Schwarz, J. (2012). Tweeting is believing?: understanding microblog credibility perceptions. Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work, CSCW  ’12 (pp. 441–450). New York, NY, USA: ACM. doi:10.1145/2145204.2145274


As I was working on a class paper about Twitter use and self-presentation on Twitter, I found this newly published article quite interesting. In the age of information explosion, people rely more and more on personalized information channels with fast updates to feed themselves fresh news. Twitter, combined with multiple search platforms, becomes an ideal medium for providing useful information. Meanwhile, credibility issues arise as people consume more and more tweets. This study looked into the elements that affect tweet credibility.


1. Purpose of the research:
Understand the features that affect readers’ perceived credibility of tweets.


2. Methods:
A mix of survey and experimental studies was conducted to achieve the research goal. A survey was first used to gauge Twitter users’ general perceptions of tweet credibility. Experiments were then carried out to test the 3 core, most visible features (message topic, user name, and user image) reflected in the survey results.


3. Main Findings:
People were poor at judging the truthfulness of tweets based on content alone; instead, they were inclined to use available heuristics, such as user names and user images, to assess the credibility of a tweet. For example, a default Twitter user image decreased both the credibility of the tweet’s content and the author’s credibility, while a topically related user name (e.g., LabReport) increased credibility compared to an internet name (e.g., Pickles_92). These findings have great implications both for individual Twitter users who want to enhance their credibility and for the UI design of search engines, which also aim to increase the perceived credibility of search results.


4. Takeaways:
Besides the research findings themselves, there are 2 points I found interesting and useful for my future research:
(1) A very clear and persuasive background section
This paper provides a very clear and strong argument for the need for the study. The background on credibility research on Twitter is mainly composed of 3 parts:
  • Concerns about credibility do exist, but no one had studied which features contribute to it. This serves as a gap to be filled.
  • A study of Twitter user names existed, examining the relationship between user names and how interesting tweets are perceived to be. This serves as a stepping-stone the research can build upon.
  • Systems exist that automatically or semi-automatically classify tweet credibility through a combination of crowdsourcing and machine learning. This serves as an application the research can help with.
These 3 arguments triangulate one another, building solid ground for the desirability and value of this study.
(2) Snowball sampling in social computing research
In the experimental part, the authors note that recruiting participants by advertising to their own followers was undesirable because of the drawbacks of snowball sampling. This piqued my curiosity: though I knew the definition of snowball sampling, I had never used it and didn’t know its drawbacks either. I followed the citation the authors gave here, which is [Bernstein, M. S., Ackerman, M. S., Chi, E. H., & Miller, R. C. (2011). The trouble with social computing systems research. Proceedings of the 2011 annual conference extended abstracts on Human Factors in Computing Systems, CHI EA ’11 (pp. 389–398).]. In this CHI 2011 paper, the authors offer a theoretical framework to help with social computing systems research. They acknowledge the weakness of snowball sampling: “the first participants will have a strong impact on the sample, introducing systematic and unpredictable bias into the results”. However, their main point is to suggest that researchers embrace snowball sampling as “inevitable”, for 3 reasons:
  • The nature of social computing is that information spreads through social channels.
  • Random sampling is an impossible standard for social computing research because influential users exist and bias the sample.
  • On many social computing platforms, recruiting a random sample is beyond the researcher’s ability.

Thus, we might acknowledge that snowball sampling is not an ideal strategy but is in some sense inevitable in CHI research. We should be fully aware of its danger of producing a biased sample and use it wisely. In this credibility paper, the authors recruited participants from Microsoft and Carnegie Mellon University, the organizations they belong to. This sample does include some diversity, but it also has its own bias: as the authors pointed out, some other demographics that consume tweets were not covered by this recruitment method. Overall, biased sampling might be inevitable in social computing research; it is the researchers’ call to choose among sampling methods based on their research questions and to minimize the bias with respect to answering those questions.

Reading notes about Lean UX

I came across the concept of “Lean UX” on a Chinese UX blog discussing how to simplify the UX design process and cut its unnecessary parts in order to face a rapidly changing market. Traditional UX development, as we learn in most courses, is recognized as a deliverable-based process: UX researchers/designers are supposed to render various kinds of deliverables. However, the whole process, with its polished reports, requires a relatively long period (typically several months) to define the requirements of the product. This is a great risk for IT companies nowadays, putting them in a position where the product might already be out of date by the time it is developed. There is also a great waste of time on deliverables that cannot be directly turned into the final experience. To solve this problem, Lean UX was proposed, with the following features:

  1. Cut complete documentation down to the bare components necessary for implementation.
  2. Split long design process into short, iterative, and low-fidelity cycles; gather team-wide suggestions during iterative cycles.
  3. Stop pushing pixels; pick up whiteboards, pencils, paper, or even napkins to convey early ideas about workflows.

There are several benefits to Lean UX: the entire team gets more involved in the design process and gains a sense of ownership; stakeholders get exposed to the design at an earlier stage; and the cost of improvement and redesign is low. The drawback of Lean UX is also obvious: designers might lose control of the design through the iterative cycles, with constant input from the entire team. This requires UX designers to have a big vision for the product in order to weigh and approve different suggestions.

Article read: Jeff Gothelf’s post on Smashing Magazine.

[Reading Reflection] From Requirement to Design

Reading Material:

Sharp, Rogers, & Preece (2007) Interaction Design. Wiley. Chapter 11.

Cooper, Reimann, and Cronin (2007) About Face 3. Wiley. Chapter 7.

These two chapters both talk about converting requirements into real designs through frameworks/prototypes. This is done by applying knowledge gained in the previous stage (persona-based scenarios and system requirements) to establish the form, input methods, and functions of the product. Both chapters emphasize using low-fidelity prototypes at the beginning, to encourage discussion, seeing the big picture, and trying multiple alternatives at this stage. It was a pleasant journey reading these chapters, seeing how they evolve paper-based sketches into computer-based prototypes ready for user testing. I love the idea of storyboarding discussed in both chapters. It reminds me of a class project I did on designing a personal digital health record system for the iPhone. My teammates and I built up the interaction flow in PowerPoint, mimicking the screen changes as our persona interacted with the product. It was surprisingly useful for building the task-oriented key path scenarios, and it also did a great job of conveying our design to fellow classmates.

I like Cooper’s structure more because it clearly follows the serial pathway of developing a design: from a low-fidelity framework in whiteboard sketches, to upgrading it with a computer-based tool using the details gained from “key path scenarios”, and later to combining the interaction framework with the visual design framework and the industrial design framework.

Compared to Cooper’s, Interaction Design has a “parallel” structure: the main aspects of prototyping are organized as separate topics, with more detailed comparisons between different concepts and methods. For example, it talks more about the pros and cons of low-fidelity versus high-fidelity prototypes, and about the difference between product-oriented and process-oriented conceptual models. I love these discussions for the deeper understanding they give of which methods we should use and why.

Given these features of the two books, I would suggest reading Cooper’s first to get a clear big picture and great detail on what the whole design process looks like and what stages compose the work. Then try Interaction Design to extract more insight into specific parts of the design process. See, this is also similar to the way you build a framework: skeleton first, details later.