Right now, workplace generative AI use is largely concentrated among people using AI tools to boost their individual productivity. They might use it for coding projects, marketing plans, and research reports, for example, creating documents and presentations more quickly than ever, while also ushering in the era of AI “work slop.” That’s going to change, said Microsoft chief scientist Jaime Teevan at Tuesday’s Charter Workplace Summit.
“We’re in the infancy of the changes that are yet to come,” she said. “What it means to use AI in a collaborative context is ahead of us.”
At Tuesday’s event, Charter editor-in-chief Kevin Delaney asked Teevan how AI use cases will have to change for collaboration — and how AI will change the way humans collaborate and increase the scale at which they do so. Here are excerpts from their conversation, edited for length and clarity:
If I use an AI tool today, I sit at my computer, and it’s a pretty individual thing. It actually doesn’t reflect the collaborative nature of work. Is that going to change, and if so, how quickly?
It has to change. I mean, all work is collaborative. That is everything we do. Even when you’re working alone to write a document, you’re writing it to communicate something to somebody. So it has to change. Actually, when we study how people are using [Microsoft] Copilot, you see pretty big changes to individual work. It’s also the easiest to measure early on. We see, for example, that people with Copilot are creating 10% more documents or reading 11% fewer emails. But the shift with AI is not just about us writing more emails for other people to use AI to summarize. There’s a real fundamental shift.
Actually, I don’t know if anybody read the HBR article by Jeff Hancock and others. This article is about AI slop, which is a super real problem. I use Copilot. I’ll be thinking through something and I’ll say, ‘Take all of the documents that I’ve written about the future of work and pull them together and write something for me,’ and I won’t have time to review it. Even without sharing that with other people, I’ll just save a document with whatever the AI produced, and then all of a sudden that becomes part of my grounding materials. It becomes truth in my corpus, reused in future interactions. And that’s sloppy. The whole concept behind work slop is essentially that using AI for individual productivity in a sloppy manner creates a mess for other people, because we aren’t centering the fact that work is collaborative. It essentially transfers the work from the creator to the receiver.
How do you fix that?
There’s all sorts of ways, and that’s why we’re doing research in that space. One of the real ones is thinking about knowledge. We’re very artifact focused. How many beautiful documents and presentations do I create to share? We need to increasingly be knowledge focused. You saw this show up a lot early on actually with programming work where you’re like, ‘Let’s measure programmer output by the number of lines of code that they write.’ That’s a ridiculous thing to do because then you get programmers writing lots of code that’s not that useful.
And you have consultants and others who are making endless PowerPoint presentations…
And really we need to center on the knowledge. This is related to a conversation we had about conversations. Essentially, interesting work in the future is about generating knowledge, so that’s about having fun conversations. It’s about attending events like this. It’s about reading random things or drawing unexpected connections. That’s the future of work.
Are we going to be prompting AI tools simultaneously more in the way that we might work on a document simultaneously?
There are two pieces. One is that the AI model is going to change, and the other is that the model with which we operate and work with the AI is going to change. Right now, AI models are trained primarily for individual interaction. You’ve probably heard of this concept of instruction tuning. You’re training a model to take an input from an individual and come up with the best response to it. That means models work well at answering your questions and don’t necessarily work well right now in group contexts. There’s some collaborative data in the training sets used for most foundation models, but it’ll be things like congressional transcripts. Congressional transcripts don’t necessarily sound like a work meeting or something you’re doing with someone. So there’s a lot of work to be done to actually think about what it means to create a model that works well in a collaborative context.
So that’s the backend foundational stuff…
And then there’s the side of what it means as humans for us. You didn’t know how to do this five years ago, but now you know how to sit down, ground the model, and come up with a meta prompt. And you have these concepts. We are likewise going to come up with the concepts for working with groups. And largely what’s interesting is this is not a technology problem. This is a human problem, a leadership problem, a business problem. What does it mean to work together in a way that provides the right information to the model and evaluates the content that’s coming out? Essentially, it’s metacognitive work. We have to uplevel the way that we’re thinking about things.
I talked about how models are instruction tuned and designed to respond to an instruction. The work of a group working together becomes how do we define our goal and what we want to discover? And then it becomes, ‘How do we plan and think about that? How do we provide judgment and discern what is relevant and take that and implement it?’ Those are really cool, inherently deeply collaborative activities that we’re doing by ourselves.
One of the ways that we’re seeing people use [Microsoft] Teams Copilot and Teams is that people are intentionally having meetings rather than saying, ‘Okay, we have a document to write. You go write the introduction and I’ll write the related work.’ What we’re seeing is actually people get on calls, have a conversation, and use that conversation to produce the work. Then it becomes really exciting to start thinking about how do you help people have better conversations? How do you identify the knowledge gap? How do you support better grounding? How do you help a person get up to speed? How do you close gaps in understanding?
We’ll soon release Charter Workplace Summit session recordings and a playbook with highlights, quotes, and additional reading for each session. Read some of our initial takeaways here.
Read our Q&A with Teevan from our “People to Watch in AI & Work” series here.