Generative AI + UI = Generative UX

Imagine a product where the menus, toolbars, and settings are content rather than framework. It should be easy to do: for over 25 years we have been using browser-based apps for everything from email, collaboration, documents, and spreadsheets to enterprise SaaS solutions. The “interfaces” for all these products are already “content,” delivered in the same way this article is displayed in a browser, alongside the tools someone determined you needed.

What if “applications” were generated from a prompt? Imagine getting a text message from your boss asking you to track a set of new key performance metrics from your partners, check them against industry standards, break everything out by geography, and be ready to share that information at a partner summit next week. Now, rather than setting up meetings with your IT department and each of your partners to talk through the various integrations, collecting and normalizing data, building a dashboard, and so on, you simply speak to your device and tell it to make an app.

An agentic enterprise could create a set of orchestrated agents. The Orchestrator Agent (a) triggers a Salesforce Agent (b) to identify your top partners, while also initiating a Data Integration Agent (c) that creates data pipelines and autonomously negotiates access to each partner’s data source. For those partners it cannot access autonomously, it asks the Orchestrator Agent (a) to trigger a Scheduling Agent (d) to message your partner contacts with the request for the data; if that fails to resolve the request, the Scheduling Agent can set up a meeting to discuss the required access. While the Data Integration Agent (c) is extracting and transforming your partners’ data by geography, an Internet Search Agent (e) searches the web and collects industry standards for comparison, adding that data to your data set. Once the data has been collected, the Orchestrator Agent (a) triggers an Aggregation Agent (f) to synthesize the results and pass them on to a Presentation Agent (g) to create a dashboard that it can add to your partner summit assets. Throughout the process, the Orchestrator Agent (a) provides you with updates and the opportunity to refine, change, and add data sources, analytics, or visualizations.
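
To make that flow concrete, here is a minimal sketch of how such an orchestration could be wired together. Everything in it is an illustrative assumption drawn from the scenario (the agent names, their input and output shapes, and the canned results); it is not an existing framework or API.

```typescript
// A minimal sketch of the orchestration described above, written as plain
// TypeScript. The agent names, shapes, and canned results are illustrative
// assumptions based on the scenario, not a real framework.

type Agent<In, Out> = (input: In) => Promise<Out>;

// Stub agents standing in for the specialized agents (b) through (g).
const salesforceAgent: Agent<string, string[]> = async () => ["Acme", "Globex"];
const dataIntegrationAgent: Agent<string[], { records: object[]; unreachable: string[] }> =
  async (partners) => ({ records: partners.map((p) => ({ partner: p })), unreachable: ["Globex"] });
const schedulingAgent: Agent<string[], void> = async () => {};
const searchAgent: Agent<string, object[]> = async () => [{ benchmark: "industry average" }];
const aggregationAgent: Agent<{ records: object[]; benchmarks: object[] }, object> =
  async (input) => ({ byGeography: input });
const presentationAgent: Agent<object, string> = async () => "https://example.com/dashboard";

// The Orchestrator Agent (a) sequences the others and reports progress to the user.
async function orchestrate(notify: (update: string) => void): Promise<void> {
  const partners = await salesforceAgent("my-account");          // (b) identify top partners
  const [integration, benchmarks] = await Promise.all([
    dataIntegrationAgent(partners),                              // (c) pipelines + data access
    searchAgent("industry KPI benchmarks"),                      // (e) industry standards
  ]);
  if (integration.unreachable.length > 0) {
    await schedulingAgent(integration.unreachable);              // (d) contact partners for access
    notify(`Requested access from ${integration.unreachable.length} partner(s)`);
  }
  const synthesis = await aggregationAgent({ records: integration.records, benchmarks }); // (f)
  const dashboard = await presentationAgent(synthesis);          // (g) summit-ready dashboard
  notify(`Dashboard ready: ${dashboard}`);
}

orchestrate((update) => console.log(update));
```

In practice each stub would be backed by a model or service call, but the orchestration shape stays the same: run in parallel where possible, and fall back to human contact where autonomous access fails.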

The entire application and user experience, data integrations and dashboard included, could be generated and executed from a single prompt. Once your partner summit concluded, you could decide whether to keep that application, refine it, save it as a pattern, or delete it.

If you treat the user interface not as a fixed set of controls, menus, and toolbars, but as a fluid set of operations that can contract or expand based on the user’s actions, GenAI could use intent, subject, context, and behavior to dynamically generate a set of operations organized and tailored to both the skill level and working style of the individual. Most of these operations may need no user interface beyond simple prompts and outputs. We have already begun seeing this with Salesforce’s Generative Canvas, which dynamically adapts to the individual user based on their role and task. But this is just the beginning.
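
As a rough sketch of this idea (and not any existing API), the signals named above could drive which operations are generated; the types and the generateOperations function below are hypothetical.

```typescript
// A rough sketch, not an existing API: intent, subject, context, and behavior
// drive which operations get generated. All names here are hypothetical.

interface UserSignal {
  intent: string;      // what the user is trying to accomplish
  subject: string;     // the object or data being worked on
  context: string;     // role, device, current workflow
  behavior: string[];  // recent actions, used to infer skill level and working style
}

interface Operation {
  label: string;
  surface: "control" | "prompt";  // many operations need no UI beyond a prompt and output
  execute: () => Promise<void>;
}

// The interface becomes whatever subset of operations is relevant right now,
// rather than a fixed menu of everything the product can do.
async function generateOperations(signal: UserSignal): Promise<Operation[]> {
  // In practice a model would interpret the signal; here a canned rule stands in.
  if (signal.intent.includes("compare")) {
    return [
      { label: "Add industry benchmark", surface: "control", execute: async () => {} },
      { label: "Break out by geography", surface: "prompt", execute: async () => {} },
    ];
  }
  return [{ label: "Describe what you need", surface: "prompt", execute: async () => {} }];
}

generateOperations({
  intent: "compare partner KPIs",
  subject: "partner performance data",
  context: "preparing for a partner summit",
  behavior: ["opened dashboard", "filtered by region"],
}).then((ops) => console.log(ops.map((op) => op.label)));
```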

This could prove invaluable in addressing the complexity of enterprise applications: SAP, Workday, Oracle, and the like, with their thousands of commands, actions, and comprehensive workflows, optimized to accommodate every scenario across an organization rather than the specific tasks an individual is expected to complete. The result is users clicking through extra screens, navigating expansive directories, and scrolling through long forms just to find the few fields that need their attention.

Adaptive GenUI products would no longer be constrained by the thinking of a product team in Silicon Valley; they would be generated from the emerging needs and objectives of the individuals using them in the moment, making them far more inclusive and accessible. GenUI will redefine what it means to design products, allowing the design to emerge from the interplay between the user and the collective capabilities of the product's underlying system. In many ways it will be more akin to a relationship: going beyond rote, surface-level interactions, it will rely on a deeper dialogue between the user and the product.

This is more than a simple reimagining of progressive disclosure; it gets to the root of a real problem. Back in 2013, Evernote's CEO, Phil Libin, said their users typically use only 5% of the product’s features, but the problem is that it’s a different 5% for each user. These “5% users” are not unique to Evernote: the latest version of Excel has 486 distinct functions; PowerPoint has 200+ menu commands; Illustrator has over 300 menu commands and more than 80 tools. Figma is just as overwhelming. And don’t even think about the SAP or Oracle enterprise suites with their dozens of business applications, each with hundreds of functions and features.

Applied to Adaptive User Experiences, GenAI could enable:

  • Generative Disclosure: Progressive disclosure has long been used to help manage users’ learning curves. To reduce cognitive load and let users focus on their tasks, controls, along with any necessary explanatory information, can be generated when and where they are needed. This enables a more hands-on approach: with AI’s support, users can learn by doing without the burden of developing a complex “mind palace” for an information architecture they will never fully use.

  • Task Analytics: GenAI can track the collective behavior of users, identifying patterns that accelerate task completion. Based on this data, GenAI can offer the optimal set of functions, tools, shortcuts, and best practices, helping users complete their work efficiently and accurately.

  • Virtual Coach: real-time assistance through a conversational interface. Users can ask questions or seek clarification while working, and the AI provides immediate responses, mimicking the experience of a human tutor.

  • Generated Procedures: step-by-step instructions based on the user's intent and context, assembling key features and setting up workflows, guiding users through each step, and ensuring they have access to the full functionality and tools needed for their specific task.

  • Compound Controls: GenAI can concatenate a sequence of interactions into a single command or even automate repetitive tasks on behalf of the user. Partnering with users to build personalized interactions based on routines or best practices, GenAI can pull together data from various sources and deliver the weekly status report or quarterly forecast (a rough sketch follows this list).
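
As a rough illustration of the Compound Controls idea, the sketch below replays a recorded sequence of steps as a single command; the compound helper and the step names are hypothetical stand-ins, not a real product API.

```typescript
// A rough illustration of a compound control: several recorded interactions
// replayed as a single command. The helper and step names are hypothetical.

type Step = { description: string; run: () => Promise<void> };

// Concatenate a sequence of interactions into one command the user can invoke.
function compound(name: string, steps: Step[]): () => Promise<void> {
  return async () => {
    for (const step of steps) {
      console.log(`${name}: ${step.description}`);
      await step.run();
    }
  };
}

// e.g. the weekly status report assembled from a user's observed routine.
const weeklyStatusReport = compound("Weekly status report", [
  { description: "Pull partner KPI data", run: async () => {} },
  { description: "Merge with industry benchmarks", run: async () => {} },
  { description: "Generate summary deck and share", run: async () => {} },
]);

weeklyStatusReport();
```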

Such a system would truly be human-centered: the experience is defined by the individual user's needs, decluttering their interactions and letting them focus on their objectives and on completing their activity. Coupled with recommendations from personalized learning, users could find new ways to express themselves, to problem-solve, and to have the confidence to be creative.
