
Empowering user research with Generative AI

Arthur Lee · Published in Bootcamp · Sep 14, 2023

Project details

  • Role: Team lead
  • Team make-up: 3 designers (including myself), 1 technologist
  • Duration: 1 week for the hackathon, from August 4th to 11th

Introduction

Recently, IBM ran an internal hackathon called the IBM watsonx Challenge, which lets IBMers learn how to bring value to clients with watsonx machine learning models. As the team lead, I charted the team’s course, facilitating team decisions as well as contributing to the technical solution and refining the design solution. With everyone having busy schedules and only one week for our submission, we had our work cut out for us. Although we can’t share our technical solution, we can talk about our concept.

To address the elephant in the room: our concept is very similar to FigJam’s Jambot. We built this concept at the start of August, before Figma announced Jambot. Jambot’s use cases are more general, while ours are specific to user research. The same applies to Mural’s and Miro’s AI-powered solutions.

The technology we leveraged

Although we can’t share the exact technical solution we used, its approach is similar to that of libraries like LlamaIndex. That approach is:

  1. taking a user-provided knowledge base, such as .txt, .csv, and other documents
  2. indexing it
  3. when a prompt is provided, fetching the relevant chunks of data from those files and feeding them in with the rest of the prompt

This means the large language model (LLM) has a good amount of context to answer the question. This method is called retrieval augmented generation (RAG), and it is purely prompt engineering; it does not involve any training or tuning. Even with that context, the results can be improved further. To do just that, I learnt some prompt engineering techniques from learnprompting.org: role prompting and few-shot prompting.
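To make that concrete, here is a minimal sketch of the retrieval augmented generation flow using LlamaIndex’s quick-start API. This is illustrative only, not our hackathon solution; import paths vary between LlamaIndex versions, and the folder name and question are made up.

```python
# Minimal RAG sketch with LlamaIndex (illustrative, not our actual solution).
# Note: older LlamaIndex versions import from llama_index instead of llama_index.core.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# 1. Take a user-provided knowledge base (.txt, .csv, etc.) from a folder
documents = SimpleDirectoryReader("./research_notes").load_data()

# 2. Index it
index = VectorStoreIndex.from_documents(documents)

# 3. At query time, relevant chunks are retrieved and fed to the LLM with the prompt
query_engine = index.as_query_engine()
response = query_engine.query("What pain points did participants mention most often?")
print(response)
```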

Role prompting

Role prompting example adapted from learnprompting.org

Role prompting is a technique where you instruct the LLM to embody a specific role; in our case, “Imagine you are a UX researcher”. This helps control the style of the AI-generated text, improving your results.
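In its simplest form, this just means prefixing the prompt with the role instruction. A hypothetical template for our use case; the wording is made up for illustration:

```python
# Hypothetical role prompt for a user-research assistant (illustrative only).
role_prompt = (
    "Imagine you are a UX researcher. "
    "Answer the following question about the interview data "
    "with the tone and rigour of a professional research report.\n\n"
    "Question: {question}"
)

print(role_prompt.format(question="What frustrated participants about onboarding?"))
```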

Few shot prompting

Few-shot prompting example adapted from learnprompting.org

Few-shot prompting is essentially including some examples, or “shots”, of what you want the LLM to do in your prompt. The LLM learns from these examples, which improves the quality of your output.
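A hypothetical few-shot prompt for turning raw interview quotes into insight statements might look like this; the two “shots” are invented for illustration:

```python
# Hypothetical few-shot prompt: two example "shots" followed by the real input.
few_shot_prompt = (
    "Rewrite each interview quote as a concise insight statement.\n\n"
    "Quote: I never know where to find the export button.\n"
    "Insight: Users struggle to locate the export feature.\n\n"
    "Quote: Setting up my profile took forever.\n"
    "Insight: Profile setup feels slow and tedious.\n\n"
    "Quote: {quote}\n"
    "Insight:"
)

print(few_shot_prompt.format(quote="The dashboard has too many numbers on it."))
```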

The concept

With the underlying tech in mind, we came up with our concept, targeting user research specifically. We chose this based on the team’s interests, inspired by this article.

The problem

If you were to ask ChatGPT to generate a user persona, even giving it an example of one, it would produce a plausible persona. However, that persona is based on the scraped internet data ChatGPT was trained on; it is not specific to the users of the problem you are trying to solve.

The vision

Now that we know an LLM can answer questions based on a user-provided knowledge base, we wanted to offer the power of an LLM grounded in the user research a UX researcher (UXR) has actually carried out. It does not replace UXRs; they still have to do the rigorous work of conducting the research. Instead, it speeds up their analysis and gives them a draft to work from.

You might now think that I am proposing a ChatGPT for user research. If we did that, the UXR would need to know prompt engineering techniques (like the ones I explained previously) to get reliable outcomes.

We want to do better.

The solution

Our solution is called Mural+, powered by watsonx. It is an AI tool integrated into your favourite whiteboarding app; in this case, we chose Mural for our example.

How it works

The user would choose the new watsonx tool, and a modal would appear prompting them to upload any relevant files to form the knowledge base. Based on that data, the watsonx tool then suggests research artefacts that can be generated from the ingested files. The chosen artefact is generated within Mural itself; it is editable and serves as a first draft for the user researcher.
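One way such a flow could work under the hood is to run an artefact-specific prompt over the index built from the uploaded files, combining the role and few-shot techniques above. A hedged sketch, reusing the LlamaIndex-style index from the earlier example with a made-up persona prompt:

```python
# Illustrative persona generation over the index from the earlier RAG sketch.
persona_prompt = (
    "Imagine you are a UX researcher. Based only on the research data provided, "
    "draft a user persona with a name, goals, frustrations, and a representative quote."
)

# The returned text would become the editable first draft placed on the board.
persona_draft = index.as_query_engine().query(persona_prompt)
print(persona_draft)
```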

Chat when you need it

Users can also ask specific questions based on the data provided

Sometimes, instead of generating research artefacts, you might want to ask specific questions. We therefore also provide a conversational interface to cover that use case.
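In a library like LlamaIndex, the same index can also back a conversational interface, which is roughly what this chat surface would sit on top of. A sketch; exact API names vary by version:

```python
# Illustrative chat over the same index (not our actual implementation).
chat_engine = index.as_chat_engine()

print(chat_engine.chat("Which participants mentioned pricing as a blocker?"))
print(chat_engine.chat("Summarise their main objections in three bullet points."))
```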

Bringing the concept further

To take the concept further, here are some ideas:

  • Showing the sources and how the LLM arrived at the answer, which would let UXRs verify the validity of the generated output (see the sketch after this list).
  • Where the artefact calls for it, pulling images from vision models like Midjourney or Adobe Firefly, or even just stock images. For example, a persona almost always comes with a picture to visualise that persona.
  • Letting users edit the files/data after the output is generated, and syncing those changes to the output.
  • If such an idea were implemented, we could tune the model so it gets good at generating research artefacts; a model with fewer parameters, and therefore cheaper to run, could then be used.
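On the first idea: retrieval libraries usually expose which chunks were fed to the LLM, so the sources could be surfaced as citations next to the generated artefact. A sketch with LlamaIndex, reusing the earlier index; the file names and question are made up:

```python
# Illustrative: surface the retrieved chunks so a UXR can verify the output.
response = index.as_query_engine().query("What do participants want from reporting?")

for source in response.source_nodes:  # chunks the answer was grounded in
    print(source.node.metadata.get("file_name"), "->", source.node.get_content()[:120])
```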

Learnings

This project taught me quite a bit. It was really interesting to learn about prompt engineering and to play with libraries like LlamaIndex. It re-engaged the more analytical, coder side of my brain that I had barely touched since studying computer science in university.

As the team lead, I got some practice facilitating the team’s direction and decisions. One of the new techniques I practiced was the social contract: before the project starts, we establish everyone’s availability, what they want out of the project, and any specific rules or guidelines we want to follow as a team. This is followed up with project planning, where we break down the tasks and assign responsibilities in the RACI format. This reduced ambiguity within our team and allowed us to complete things smoothly. We even turned in our submission two days before the deadline!

Conclusion and thanks

It was a great learning experience getting our feet wet designing for an AI use case. Like it or not, generative AI is the future, and the faster we embrace it the better. The concept, although good, just scratches the surface of what is possible. It is also really interesting to see how our execution differs from Figma’s Jambot: Jambot took a more general-purpose approach, providing ways to ideate on and summarise what’s on the FigJam board, whereas we went straight for producing a draft of the research artefact.

I would also like to thank everyone on my team, Jeremy, Justin and Waheed, for being such gracious and cooperative teammates. A special shoutout to Justin, who did the heavy lifting for our video submission and design solution.


Product Designer. Has a passion for all things design and tech. Personal website: arthurleeyk.com