After leaving Xinova, I co-founded Zeitworks on an insight that had been forming throughout my time building AI systems for knowledge workers and innovation networks: most organizations had no objective understanding of how work was actually being performed inside their own walls.
Not a rough sense. Not a managerial approximation. No ground truth whatsoever.
They were investing heavily in automation without knowing what they were actually automating — or whether it was worth automating at all. The processes documented in procedure manuals bore little resemblance to the workflows that had evolved organically across complex, heterogeneous systems over years of real use. Leadership assumed they understood how work flowed. Employees had long since developed their own workarounds, shortcuts, and adaptations that nobody had ever mapped. The gap between the two was where automation investments went to fail.
At Atlas, I had built systems that understood how individual knowledge workers interacted with information across their digital lives. At Xinova, I had built systems that matched the right ideas to the right organizations across a global network. Zeitworks applied the same core insight to a new and more urgent problem: what if you could build a system that watched how work actually happened inside an organization, modeled it accurately, and surfaced the truth that automation investments required — without disrupting the operations it was observing?
That was the founding conviction. And it was right.
Zeitworks
What we built
Zeitworks developed a continuous process intelligence platform that deployed lightweight endpoint sensors across an organization's workforce, generating detailed, real-time process models with zero operational disruption. Integrated with Finance and HR systems, the platform delivered live cost analytics alongside process data — giving organizations not just a map of how work was flowing, but a precise understanding of what it was costing them and where the highest-value opportunities for automation and optimization actually lived.
The platform produced what we called the ground truth of work: an objective, data-driven model of how processes were actually performed, as distinct from how they were documented, described, or assumed to work. For the first time, organizations could target automation investments with precision, optimize onboarding and workflows based on behavioral evidence, and make infrastructure decisions grounded in real performance data rather than internal politics or vendor promises.
Designing for trust
Most process intelligence tools are built for analysts and executives. The workers whose activities generate the data are treated as subjects, not participants — observed without transparency, optimized without input, and given no stake in the outcomes.
We made a deliberate decision to do the opposite.
Zeitworks gave end users full visibility into the data being collected about their work and the process models the system was generating from it. Not a sanitized summary. The actual models — screenshots, keywords, frequency data, time spent across applications, the people identified as performing each process. Everything the system knew, the worker could see.
That decision carried risk. Employees are often skeptical of monitoring tools, and with good reason. We were asking people to trust a system that was watching everything they did on their computers and building models from it. Transparency was not just an ethical choice — it was a strategic one. If workers didn't trust the system, they would find ways to work around it, and the data would be compromised. If they did trust it, they would become the system's most valuable contributors.
What it delivered
In our first pilot — a four-week engagement with Johnson Financial and West Monroe Partners — Zeitworks deployed sensors to 200 associates across 164 distinct roles. The platform analyzed 1,468 activities against 66 performance benchmarks. The result: $28 million in identified improvement opportunities and a clear path to a 48% efficiency ratio for the bank.
In four weeks. With zero disruption to daily operations.
We went on to deliver four successful enterprise pilots, each one sharpening our understanding of the market and validating both the product and the business model. In 2021, Zeitworks was acquired by Augment.io.
My role
I came into Zeitworks with a specific conviction: that the design of the product would determine whether the technology was trusted, adopted, and sustained at scale — or whether it became another enterprise tool that IT bought and employees ignored.
That conviction shaped everything I did.
On the product side, I defined and drove the vision, roadmap, and detailed requirements from the ground up and kept the team focused on the priorities that mattered most to the customers we were trying to win. I designed the core workflows, user personas, admin portal, customer portal, and process editor — translating complex machine learning outputs into experiences that non-technical users could understand, trust, and act on. I conducted the competitive and market research that shaped our product positioning and helped us find the specific wedge where Zeitworks could win.
On the business side, I co-led the fundraising effort that secured a $4M seed round and played a central role in defining the business model, pricing strategy, and go-to-market approach. I co-led the recruitment of our CEO, CTO, and engineering team and managed our remote development team in India through the company's first six months, when the distance between vision and execution was at its greatest and the margin for misalignment was at its smallest.
I also drove the business development efforts and customer engagement that produced our pilot programs — which meant being in the room with enterprise clients, understanding their problems at a level of specificity that informed the product, and making commitments about what Zeitworks could deliver that the team then had to honor.
The through-line across Atlas, Xinova, and Zeitworks is the same thesis applied to progressively larger and more complex problems: that AI systems earn their value not through the sophistication of their models but through the trust of the people who use them. That trust is a design problem. And design is where I have always done my most consequential work.
Zeitworks was acquired in 2021. The acquisition validated the product. The pilots validated the market. The team validated the approach.
The bet was right.
Deep Dive
Designing for Trust: The Opt-In Model
At Atlas, we used an opt-out model. In a consumer context where the data stayed private and the value was entirely personal, that made sense. Zeitworks was a different problem entirely.
The data we were collecting wasn't staying with the individual worker. It was being aggregated into process models that managers and executives would use to make decisions about workflow improvements, automation investments, and even organizational design. Our challenge was earning employees' trust in Zeitworks. Workers who felt enrolled without genuine consent, or who suspected they were being spied on, would behave differently once they knew the system was watching, and the ground truth we were trying to build would be compromised before we had collected a single data point.
So we flipped the model. Every worker chose which applications would contribute to the data collection. Nothing was included by default.
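As an illustration only — this is a hypothetical sketch, not the actual Zeitworks implementation — a per-worker opt-in configuration can be modeled as an allowlist of applications that starts empty, so the sensor collects nothing until the worker explicitly grants consent:

```python
from dataclasses import dataclass, field


@dataclass
class SensorConsent:
    """Hypothetical per-worker opt-in record: nothing is collected by default."""
    worker_id: str
    allowed_apps: set = field(default_factory=set)  # empty set = collect nothing

    def opt_in(self, app: str) -> None:
        """Worker explicitly adds an application to data collection."""
        self.allowed_apps.add(app)

    def opt_out(self, app: str) -> None:
        """Worker can withdraw consent for an application at any time."""
        self.allowed_apps.discard(app)

    def may_collect(self, app: str) -> bool:
        """Sensor checks consent before recording any activity."""
        return app in self.allowed_apps


# The sensor consults the consent record before logging anything.
consent = SensorConsent(worker_id="w-42")
assert not consent.may_collect("Excel")  # default: no collection
consent.opt_in("Excel")
assert consent.may_collect("Excel")      # collection only after explicit opt-in
```

The design choice the sketch captures is the inversion of the default: an opt-out system would initialize `allowed_apps` to everything, while opt-in initializes it to nothing and makes every inclusion a deliberate act by the worker.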
Before any rollout, we held structured meetings with employee representatives from across the organization — people selected to reflect the range of roles and concerns present in the workforce, not just the ones leadership thought were relevant. We explained exactly how Zeitworks gathered data. More importantly, these individuals told us which processes they already knew were broken, which ones they thought could benefit from automation, and which ones they were skeptical about.
Those conversations gave workers a stake in the outcome. When the sensors went live, workers weren't watching the system with suspicion. They were watching it with the specific expectation that it would surface what they had already told us was worth finding.
The opt-in model cost us some data breadth. But the data we collected was cleaner, the models were more trustworthy, and adoption was faster and more sustained than any opt-out approach would have produced.
In enterprise AI, trust is not a feature you add after the architecture is decided. It has to be architected into the product from the beginning. Getting it wrong at the foundation means building everything else on sand.
