Generative AI + UI = Generative UX

Imagine a product where the menus, toolbars, and settings are content rather than framework. It should be easy to do: for over 25 years we have been using browser-based apps for everything from email, collaboration, documents, and spreadsheets to enterprise SaaS solutions. The “interfaces” for all these products are already “content,” delivered in the same way this article is displayed in a browser, alongside the tools someone determined you needed.

What if “applications” were generated from a prompt? What if software was created on demand? Imagine getting a text message from your boss… She asks you to track a set of new key performance metrics from your partners, check them against industry standards, and break everything out by geography. Oh, and be ready to share that information next week at a partner summit. Historically, that would require a team of people working late nights and weekends.

Now imagine that rather than setting up meetings with your data analysts and IT department to collect and normalize data, building dashboards, and tracking down each of your partners to negotiate access to their data, you simply had a conversation with your “device” and told it what you wanted.

What if the dashboard simply manifested in front of you, and you could tap and drag its elements to explore what-ifs, or simply ask your “device” to play through various scenarios?

The entire application, and the user experience, with the data integrations and dashboard, could be generated and executed from a single prompt. Once your partner summit concluded, you could decide whether to keep that application, refine it, save it as a pattern, or delete it.

No more windows, no more menus, no more mouse. No more restrictions of 50-year-old conventions defining your experience. The challenge will be mastering your library of LLMs and the design languages of your expressions. Indeed, in all likelihood there will emerge the concept of a Design Language Model: like an LLM, but a model based on manifestations, visualizations, colors, fonts, materials, patterns, and styles for best expressing the person’s intention.
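To make that idea a little more concrete, here is a purely speculative sketch, in TypeScript, of the kind of structured output a DLM might return. No such model or API exists today; every type and field name below is an assumption invented for illustration.

```typescript
// Hypothetical sketch only: neither these types nor a "Design Language Model"
// API exist today. This imagines the kind of structured answer such a model
// might give when asked how to best express a person's intention.

interface DesignExpression {
  // How the intention should manifest: a surface, a voice, an ambient cue, etc.
  manifestation: "dashboard" | "card" | "voice" | "ambient" | "document";
  visualization?: "chart" | "map" | "timeline" | "table";
  palette: string[];                         // colors, e.g. ["#0B3D91", "#F5F5F5"]
  typography: { family: string; scale: number[] };
  materials: string[];                       // e.g. ["glass", "paper"] for spatial contexts
  patterns: string[];                        // reusable interaction patterns, e.g. "drag-to-compare"
  style: string;                             // overall tone, e.g. "calm", "urgent", "playful"
}

// A DLM, like an LLM, would map an intention to an expression of it.
type DesignLanguageModel = (intention: string) => Promise<DesignExpression>;
```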

Back in 2013, Evernote's CEO, Phil Libin, said that their users typically use only 5% of the product’s features, but the problem is that it’s a different 5% for each user. These “5% users” are not unique to Evernote: the latest version of Excel has 486 distinct functions; PowerPoint, 200+ menu commands; Illustrator, over 300 commands and more than 80 tools. Figma is just as overwhelming. And don’t even think about the SAP or Oracle enterprise suites, with their dozens of business applications, each with hundreds of functions and features. No one truly uses all the capabilities of these systems, yet they are faced with navigating these complex information architectures each time they want to try something new.

If you treat the user interface not as a fixed set of controls (menus, toolbars, icons, and so on) but as something fluid that can contract or expand based on the user’s actions, whether gestural or verbally expressed, GenAI could use the intent, subject, context, and behavior to dynamically generate a set of articulations organized and tailored to the needs and context of the individual. Most of these articulations may not have what we think of as a classic user interface; they may simply be interactive artifacts that allow for manipulation directly or through prompts. We have begun seeing this already with Salesforce’s Generative Canvas, which dynamically adapts to the individual user based on their role and task. But this is just the beginning.
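As a thought experiment, here is a hypothetical sketch of how intent, subject, context, and behavior might be captured as a single signal and handed to a generative system that returns an articulation rather than a fixed screen. The names, types, and the generateArticulation function are all invented for illustration, not a real product API.

```typescript
// Hypothetical sketch only: illustrative names, not an existing API.
// It shows how intent, subject, context, and observed behavior might drive
// on-demand generation of an "articulation" instead of navigating fixed menus.

interface UserSignal {
  intent: string;                        // what the person is trying to accomplish
  subject: string[];                     // the domains involved, e.g. ["partners", "KPIs"]
  context: { role: string; device: string; locale: string };
  behavior: "spoken" | "typed" | "gestural";
}

interface Articulation {
  kind: "interactive-artifact";          // not a classic windowed UI
  elements: string[];                    // directly manipulable pieces, e.g. ["map", "benchmark-chart"]
  affordances: ("tap" | "drag" | "prompt")[];
  persistence: "ephemeral" | "saved-as-pattern";
}

// Assumed to call an underlying generative model; this signature is an assumption.
declare function generateArticulation(signal: UserSignal): Promise<Articulation>;

// Usage: the partner-summit scenario from earlier, expressed as one signal.
const dashboard = generateArticulation({
  intent: "track new partner KPIs against industry standards, broken out by geography",
  subject: ["partners", "KPIs", "industry benchmarks", "geography"],
  context: { role: "partner manager", device: "tablet", locale: "en-US" },
  behavior: "spoken",
});
```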

Generative UX changes the nature of interaction design and could prove invaluable in addressing the complexities of our lives. We could simply speak, gesture, or perform a behavior not only to generate the articulation but to have it adapt to our changing context, persist, and engage with other articulations, whether our own or those of others.

Adaptive GenUX would no longer be constrained by the thinking of a designer or product team in Silicon Valley, nor by a design system composed of menus and buttons. Experiences would be dynamically generated based on the emerging needs and objectives of the individuals using them in the moment, making them not only more relevant but also far more inclusive and accessible.

GenUX will redefine what it means to design products, allowing the design to emerge from the interplay between the user and the collective capabilities of the product's underlying system. In many ways it will be more akin to a relationship: going beyond rote, surface-level interactions, it will rely on a deeper dialogue between the user and the product.

This will fundamentally change the role the designer plays in developing the capabilities to create these expressions. Designers who define their value by their hand skills will be marginalized. Designers who don’t understand how to define design languages will be left out. This is more than a simple reimagining of progressive disclosure or setting parameters for a chatbot’s personality; it is more akin to terraforming or game design.

Designers need to reach deep into their understanding of orchestration, organic growth, movement, and transformations taking place in space and time, rather than in 2D, 2.5D, or even 3D.

Designers need to focus on building DLMs: Design Language Models.
