UX Design Life-Cycle, Part 7: Deliver & Test the Design


Background

We are a small team of talented creatives and developers, often asked some version of the question: “...tell us about your process or design principles.”

We are going to do something far more valuable: write about what has worked and what has not worked.

In the weeks ahead, KRUTSCH will post a series of bite-sized articles that encapsulate a decade of experience leading the life-cycle of product design and user experience (UX), across a variety of industries, with clients both large and small, including consumer and commercial projects.

This is Part 7: Deliver and Test the Design. Introductory post: UX Design Life-cycle, A Mini-Series.

Follow us on LinkedIn to see future posts in your feed.

Testing UX Design

There is no substitute for real-world user testing. Let me repeat that: there is no substitute for real-world user testing. You would think that after decades of designing software experiences, we could just design an app interface where nothing goes wrong or is confusing, everything is a joy to experience, and we are loved and respected for the effort.

To be sure, we are great at designing interfaces and have depth and breadth of experience across a range of application categories. But every product launch surprises us a little. It’s not so much that testing uncovers issues in need of repair, polish, or even re-design; that’s a given for any but the most trivial of apps. No, it’s the simple, passé elements that we exercise over and over internally that somehow trip up our end-users and leave us shaking our collective heads.

Here are two examples:

Case Study: DONT™ SECURITY CODES

Dont, a pair of parent and child native apps, monitors and reports teen phone use while driving to help parents address unsafe behavior. You can read our case study on the design and development process, if you want to know more about this unique, patented app.

During the on-boarding process, we used the mobile device’s phone number as the login identifier, relying on an SMS text message containing a verifying PIN to ensure ownership of the account phone number. 

On Android, this looked as shown below in the screen capture:

Dont - Text Code.png

It works great as a low-friction on-boarding and account-creation technique. You enter your phone number, receive a text message, and both iOS and Android “copy and paste” the PIN code into the form; you tap Verify and you are finished. Couldn’t be easier, right?
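
For the curious, here is a minimal sketch of what that flow can look like on the server side. The function names, the 6-digit format, and the 5-minute expiry are illustrative assumptions, not the actual Dont implementation:

```typescript
// Minimal sketch of an SMS-PIN verification flow (illustrative only).
import { randomInt } from "crypto";

type PendingPin = { code: string; expiresAt: number };
const pending = new Map<string, PendingPin>(); // keyed by phone number

// Hypothetical SMS gateway call; a real app would use a provider such as Twilio.
function sendSms(phone: string, body: string): void {
  console.log(`SMS to ${phone}: ${body}`);
}

export function issuePin(phone: string): void {
  const code = String(randomInt(0, 1_000_000)).padStart(6, "0"); // 6-digit PIN
  pending.set(phone, { code, expiresAt: Date.now() + 5 * 60_000 }); // 5-minute window
  sendSms(phone, `Your verification code is ${code}`);
}

export function verifyPin(phone: string, code: string): boolean {
  const entry = pending.get(phone);
  if (!entry || Date.now() > entry.expiresAt) return false; // missing or expired
  const ok = entry.code === code;
  if (ok) pending.delete(phone); // single use
  return ok;
}
```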

A unique aspect of this app was parent-child pairing; the parent on-boards, does the above, and then sends an invitation via a text message to their driving-aged child. The text contains a universal/deep link with account info and redirects to the correct app store. After downloading the app, the child does the same PIN verification.
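
As a rough illustration, an invite link like that might be assembled along the following lines; the domain, store URLs, and function names are placeholders, not the real Dont links:

```typescript
// Illustrative sketch of an invite deep link with a store fallback (not the actual Dont URLs).
export function buildInviteLink(parentAccountId: string, pairingToken: string): string {
  const params = new URLSearchParams({ account: parentAccountId, token: pairingToken });
  return `https://example.com/invite?${params.toString()}`; // hypothetical universal-link domain
}

// Server-side fallback when the app is not yet installed: send the device to its store.
export function storeRedirect(userAgent: string): string {
  return /iPhone|iPad/i.test(userAgent)
    ? "https://apps.apple.com/app/id0000000000" // placeholder App Store listing
    : "https://play.google.com/store/apps/details?id=com.example.dont"; // placeholder Play listing
}
```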

But there was one more step: making sure the parent-child pairing is authenticated. In other words, the parent and child need to be sure they are really connecting to the assumed person on the “other side” of the cloud.

Early in the design/test process, we actually had a user forward the invite text to his girlfriend, who then promptly “joined” her boyfriend’s family for safe teen driving. Just the kind of “hack” we wanted to know about before launch, so we set out to fix that problem.

For a solution, we borrowed from the banking system. When you are both speaking with a banker on the phone and looking at on-line information from the same bank, sometimes the web app will present a pair of codes – one for the banker to read to the customer and vice versa. So, we implemented something similar for pairing the parent and child in the dont app:

Dont - Pairing Code.png

You can see “Your code:” and the confirming child code (this is the teen-aged driver’s example screen). We even thought we were being clever by adding a “Call” button to make it easy to call mom or dad and finish setup.
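
Conceptually, the exchange works like the sketch below: the service issues a short code to each side, each person reads theirs aloud over the phone, and pairing completes only when both match. The names and structure here are illustrative, not the production Dont code:

```typescript
// Illustrative sketch of a mutual pairing-code check (not the production Dont code).
import { randomInt } from "crypto";

interface PairingSession {
  parentCode: string; // shown on the parent's screen, read aloud to the child
  childCode: string;  // shown on the child's screen, read aloud to the parent
  confirmed: boolean;
}

const sessions = new Map<string, PairingSession>();
const shortCode = () => String(randomInt(0, 1_000_000)).padStart(6, "0");

export function startPairing(sessionId: string): PairingSession {
  const session = { parentCode: shortCode(), childCode: shortCode(), confirmed: false };
  sessions.set(sessionId, session);
  return session;
}

// Each side enters the code the other person read to them; both must match.
export function confirmPairing(sessionId: string, codeHeardByParent: string, codeHeardByChild: string): boolean {
  const s = sessions.get(sessionId);
  if (!s) return false;
  s.confirmed = codeHeardByParent === s.childCode && codeHeardByChild === s.parentCode;
  return s.confirmed;
}
```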

After we launched both apps in the Apple App Store and the Google Play Store, we ran a user study with real parent-teen pairs, recruited through Userlytics.

We reviewed the recorded footage and discovered that the teens were getting stuck on the Security Code matching part of on-boarding.

Wait, what? The kids are getting stuck and not the parents?

Here is what was happening, in some cases:

1) Parent on-boards and sends the invite text message to the child;

2) Teen may or may not respond to the text message (kids, right?);

3) Parent calls the kid and says: “Hey, did you click on that link? Get moving…”;

4) Teen taps the link, downloads the app, goes through the PIN verification, closes the app and stops;

5) Parent calls the kid again and says: “Hey, I need you to verify the codes…”

The problem? The parent says: “My code is 987 654; what’s your code?” The kid goes back to their default messaging app, looks at the PIN verification text and reads that code instead, never opening the app to see the Security Code screen it presents on launch.

If you are curious how we addressed that issue, you can download the app and try it out.

Case Study: SEE A STAR® PHONE RINGING

This is a story I will be telling for years when asked about the value of real-world user testing.

See A Star allows fans and stars to video chat, either live with a unique, patented process called Meet Now, or by scheduling a session. Think of it as Zoom with a paywall, integrated into social channels for live meetings and sharing of video snippets.

To get started, the See A Star team worked with retired Minnesota Vikings. We performed our initial customer study with a group of Vikings Legends, most notably Chuck Foreman, among others. We built early prototypes and brought retired players into our office for “lab testing” of various design concepts.

We have written in the past about the dangers of “lab” testing; you can read about our early experiences with in-house testing and the pitfalls that await.

But we did it anyway, mostly because the apps were not far enough along to deploy to real users outside of our office (i.e., we had to guide users around unimplemented portions of the app). And, of course, this kind of testing always appears to yield great results, mostly because the test subjects are under your watchful eye, paying close attention, and being careful not to make mistakes (as described in the blog post above).

Fast forward… we launched the apps into the wild and started noticing something with the retired NFL players: they were missing scheduled calls. Fans were contacting us to say: “The player never showed up for the meeting!”

Scheduling a 2-way video chat was modeled around existing systems, like GoToMeeting and Zoom. The fan browses the collection of stars, picks one, chooses a time/date for the video chat and pays.

Afterwards, e-mails are sent out to both the fan and the star, looking something like this:

SaS - Meeting Invite.png

A calendar entry is made on both sides, with notifications set, as well as a reminder e-mail prior to the scheduled call. Just before the meeting time, the fan would see a calendar notification, tap on it, and the app would open to the waiting room, looking something like this:

SaS - Join The Call.png
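
For reference, the calendar entry described above is the kind of thing a standard iCalendar (.ics) invite with a display alarm handles. The sketch below is a generic illustration with assumed field values, not the actual See A Star invite:

```typescript
// Illustrative iCalendar (.ics) invite with a pre-call display alarm (assumed values).
function toIcsDate(d: Date): string {
  return d.toISOString().replace(/[-:]/g, "").replace(/\.\d{3}/, ""); // e.g. 20240115T190000Z
}

export function buildInvite(start: Date, durationMin: number, joinUrl: string): string {
  const end = new Date(start.getTime() + durationMin * 60_000);
  return [
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "BEGIN:VEVENT",
    `DTSTART:${toIcsDate(start)}`,
    `DTEND:${toIcsDate(end)}`,
    "SUMMARY:See A Star video chat",
    `DESCRIPTION:Join from the app: ${joinUrl}`,
    "BEGIN:VALARM",
    "ACTION:DISPLAY",
    "DESCRIPTION:Your video chat starts soon",
    "TRIGGER:-PT5M", // notification five minutes before the call
    "END:VALARM",
    "END:VEVENT",
    "END:VCALENDAR",
  ].join("\r\n");
}
```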

The star, in this case a retired Minnesota Viking, would also see the calendar notification – maybe on a laptop or their iPad – and would pick up their phone, grab a seat in a chair, and… wait for the call.

When these fan reports started coming in, we would contact the retired player and ask: “What happened?”

Player: “I had my phone sitting right next to me, and I was waiting for the guy to call, but the phone never rang!”

The phone never rang.

Takeaway

Never compromise on real-world testing of the user experience.

Developing, Testing, and Delivering an App

"Here’s what this could look like:"

This is how we start every discussion around development, testing, and delivery. The conversation addresses how our practiced models can be adapted to each client. Going from concept to delivery, on time and on budget, requires a lot of compromise. Having been on the other side of the aisle, we know what it takes to build and ship a complex, commercial product. We know where the opportunities for compromise crop up, and our models of “what this could look like” take these decision points into consideration, ensuring the best fit for each client’s team, resources, and priorities.

Here are some common models we’ve used with clients:

Development

They build it. They have the resources to take our designs, prototype, and documentation and get to work. Only a handful of our clients choose this option because, well, do you know many product teams that have too much time on their hands?

We build it. We’ve used a handful of models here, tailored to each client’s needs.

Front-End Reference Pages

Rather than delivering static documents as a “style guide” for the design, we provide working code that realizes the designed front-end user interface. We find this works better as it frees up our clients to focus on their product’s core competency, rather than trying to stay up-to-date on the latest front-end technologies. The client then takes the reference pages and integrates them into their back-office system.
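
As a rough idea of what a reference page deliverable contains, here is a minimal component sketch, assuming the Angular stack mentioned under System Testing below; the selector, markup, and styling are illustrative only:

```typescript
// Minimal, illustrative "reference page" component; selector, markup, and styles are examples only.
import { Component } from "@angular/core";

@Component({
  selector: "ref-primary-button",
  standalone: true,
  template: `<button class="btn-primary">{{ label }}</button>`,
  styles: [`.btn-primary { padding: 0.5rem 1rem; border-radius: 4px; }`],
})
export class RefPrimaryButtonComponent {
  label = "Get started"; // the client wires real data and handlers during integration
}
```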

Full Stack Implementation

In this model, our developers integrate the Reference Pages into code that we develop for our client's back-office (either in a public cloud or an internal data center). At some point, both the front-end code (Reference Pages) and back-office additions are handed off to the client development team for continued maintenance and enhancement.

System Testing

Once the front-end code has been integrated into the back-office services, we test the complete system. 

We perform functional testing on mobile and desktop platforms. Our team likes to use standard frameworks that are stable and optimized for portability, such as Bootstrap, Microsoft’s Fabric, or Google’s Material Design, for styling. Angular is our go-to client framework, paired with AWS on the back end. Using BrowserStack, we restrict testing to a matrix of common browsers, platforms, and versions, such as Safari, Chrome, Internet Explorer, Firefox, and Edge.
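
To make that matrix concrete, here is an illustrative sketch of the kind of browser/platform list that drives a run; the exact combinations and the runner wiring vary by project and are not a specific KRUTSCH configuration:

```typescript
// Illustrative browser/platform matrix; combinations and runner wiring are assumptions.
interface Target { browser: string; browserVersion: string; os: string; }

const targets: Target[] = [
  { browser: "Chrome",  browserVersion: "latest", os: "Windows 10" },
  { browser: "Edge",    browserVersion: "latest", os: "Windows 11" },
  { browser: "Firefox", browserVersion: "latest", os: "Windows 10" },
  { browser: "Safari",  browserVersion: "latest", os: "macOS" },
];

// Hypothetical runner stub: in practice each target is handed to the BrowserStack /
// Selenium (or Cypress/Playwright) session that executes the functional suite.
async function runSuiteOn(target: Target): Promise<void> {
  console.log(`Running functional suite on ${target.browser} / ${target.os}`);
}

(async () => {
  for (const t of targets) await runSuiteOn(t);
})();
```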

We maintain a bug list using either our internal Airtable Bug Tracker template or the client’s preferred method.

Proctored Usability Testing

While bugs are being addressed, we set up and proctor a limited usability test on a “near-final” iteration. We observe users interacting with the new site/app based on a set of prioritized tasks we’ve asked them to complete. We then review our findings, create a summary of results, and recommend and execute moderate design changes before deploying.

Delivery and Deployment

Regardless of which model the client chooses, we provide full documentation to build, deploy, and support the developed application. In general, our build and hand-off process works as follows:

1. We develop and test in a KRUTSCH BitBucket repository and an AWS sandbox.

2. We track the feature backlog and major defects as described above.

3. We perform mobile and browser testing internally, as described above.

4. When ready for a release, we branch and tag in the KRUTSCH BitBucket repository.

5. The client pulls from our BitBucket, integrates, builds, and deploys to their own back-office servers or cloud.

A client often provides a development lead who will sit in on developer discussions, routinely pull down and test code, and execute bug fixes in tandem with our development team. This ensures that the client can quickly become familiar with the code, effectively take ownership of the product, and easily handle future maintenance. On-going maintenance works in much the same fashion.

End Note

This is the final part of our seven-part mini-series on the UX Design Life-cycle. Read the full series here: UX Design Life-cycle, A Mini-Series.

Follow us on LinkedIn to see future posts in your feed.


Ken Headshot.jpg

Ken Krutsch is Founder & President of KRUTSCH, a digital product design firm, specializing in commercial product vision and implementation, including customer research, roadmap development and full-stack user experience design.

Follow KRUTSCH on LinkedIn to read the rest of the series.

Emma Headshot.jpg

Emma Aversa is a visual designer at KRUTSCH, with a background in architectural design. She excels at digital design and communication executed through simple, function-forward designs.