On March 31, 2026, Microsoft made Copilot Cowork available through its Frontier program. The release marks a genuine shift in how Microsoft positions Copilot: no longer a conversational assistant that drafts emails and summarizes documents, but an execution agent that plans, acts, and delivers work across the entire Microsoft 365 environment. For CIOs and IT administrators, that shift raises concrete questions about technical requirements, how to enable it, and what it means for data governance in your tenant.
This article covers what Copilot Cowork actually is, how it works in practice, and what your organization needs to address before rolling it out at scale.
Copilot Cowork is an execution agent built into Microsoft 365 Copilot, available since late March 2026 through the Microsoft Frontier program. It is not a conversational upgrade to Copilot. It is an automation layer capable of handling long-running, multi-step tasks autonomously across multiple Microsoft 365 applications simultaneously.
The technology behind it comes from a direct partnership between Microsoft and Anthropic. Microsoft integrated the technology powering Claude Cowork, Anthropic's agentic product, into Microsoft 365. The key difference from Claude Cowork is where execution happens: Claude Cowork runs locally on the user's machine, while Copilot Cowork operates entirely in the Microsoft 365 cloud, under the tenant's identity controls, permission policies, and compliance framework.
Cowork ships with 13 built-in skills: Word, Excel, PowerPoint, PDF, Email, Scheduling, Calendar Management, Meetings, Daily Briefing, Enterprise Search, Communications, Deep Research, and Adaptive Cards. Users can also create up to 20 custom skills by placing SKILL.md files in a dedicated OneDrive folder.
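To make the custom-skill mechanism concrete, here is a minimal sketch of what a SKILL.md file might look like. Microsoft documents the actual schema; the frontmatter fields and layout below are illustrative assumptions based only on the article's description of plain-file skills in a OneDrive folder, not the published format.

```markdown
---
name: monthly-variance-summary
description: Summarizes budget variance from the finance workbook and drafts a report.
---

# Monthly variance summary

1. Open the current month's budget workbook in the Finance SharePoint library.
2. Compare actuals against forecast and note any line item off by more than 10%.
3. Draft a one-page Word summary and save it to the "Reports" OneDrive folder.
```

A skill like this is just instructions in plain language; Cowork reads it as guidance for how to carry out the task, within the user's existing permissions.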
The Microsoft Frontier program gives organizations early access to new Microsoft 365 AI capabilities before general availability. Copilot Cowork became available in Frontier on March 31, 2026. No general availability date has been announced.
This follows a pattern Microsoft has used for recent major Copilot releases: limited research preview, then Frontier access, then broad rollout. Organizations that enroll in Frontier now gain hands-on experience and the ability to shape their governance posture ahead of the wider release.
You start by describing what you need in plain language, up to 16,000 characters. You can also attach files directly in the chat. From there, Cowork generates a step-by-step plan, executes it in the background, and streams real-time progress so you can follow along.
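The plan-then-execute-with-streamed-progress loop described above can be sketched in a few lines. This is an illustrative pattern only; the types, event shapes, and function names are hypothetical and are not Cowork's API.

```python
from dataclasses import dataclass
from typing import Callable, Iterator

@dataclass
class Step:
    """One planned unit of work: a description plus the action that performs it."""
    description: str
    action: Callable[[], str]

def run_plan(steps: list[Step]) -> Iterator[dict]:
    """Execute steps in order, yielding progress events the caller can stream."""
    total = len(steps)
    for i, step in enumerate(steps, start=1):
        yield {"event": "started", "step": i, "of": total, "description": step.description}
        result = step.action()
        yield {"event": "finished", "step": i, "of": total, "result": result}

# Usage: a two-step plan modeled on the budget-review example.
plan = [
    Step("Pull figures from the budget workbook", lambda: "12 rows extracted"),
    Step("Draft the summary report", lambda: "report.docx created"),
]
events = list(run_plan(plan))
```

Because `run_plan` is a generator, the caller sees each event as it happens rather than waiting for the whole plan to finish, which is the essence of the follow-along experience the article describes.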
The intelligence layer behind this is called Work IQ. It pulls signals from across your organization: Outlook emails, Teams conversations, SharePoint and OneDrive files, Excel workbooks, calendars, and meeting transcripts. Cowork works from the full context of how you and your organization operate, not just a snapshot of one data source.
Here is a concrete example documented by Microsoft: a monthly budget review. You describe the goal. Cowork pulls data from the relevant spreadsheets, reviews related email threads, coordinates information across sources, and delivers a finished report. The whole process can take anywhere from a few minutes to a few hours, depending on task complexity.
Throughout the process, you stay in control. Before any sensitive action, such as sending an email, posting to a Teams channel, or creating a calendar event, Cowork displays an explicit approval prompt with a preview of what it plans to do and a risk level indicator (medium or high). You can pause, resume, or cancel at any time. If you lose your connection, Cowork picks up automatically from where it left off.
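The approval gate is worth pinning down as a pattern: sensitive actions pause for an explicit yes/no with a preview and a risk level, everything else runs directly. The sketch below is a simplified illustration; the action names, risk tiers, and callback signature are assumptions for the example, not Cowork's actual interface.

```python
from typing import Callable

# Hypothetical mapping of sensitive actions to the article's risk levels.
RISK_LEVEL = {"send_email": "high", "post_to_teams": "medium", "create_calendar_event": "medium"}

def execute_action(action: str, preview: str,
                   approve: Callable[[str, str, str], bool]) -> str:
    """Run `action`, pausing for user approval when it carries a risk level."""
    risk = RISK_LEVEL.get(action)
    if risk is not None and not approve(action, preview, risk):
        return "cancelled"
    return f"{action}: executed"

# A user who declines a high-risk email but allows a routine file save:
declined = execute_action("send_email", "Draft to finance team", lambda a, p, r: False)
saved = execute_action("save_file", "report.docx to OneDrive", lambda a, p, r: True)
```

The design point is that the gate sits between planning and execution: the agent can prepare a sensitive action in full, but nothing irreversible happens until the user has seen the preview and approved it.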
Files produced by Cowork are saved directly to OneDrive and SharePoint. They inherit the tenant's existing confidentiality and sharing policies and are immediately available within the Microsoft 365 ecosystem.
This is the dimension Microsoft's marketing announcements did not emphasize, but it is the one that matters most to IT security teams.
Copilot Cowork accesses every piece of data and every service the user is already permitted to reach within their Microsoft 365 tenant. It does not create new access paths. It operates strictly on existing permissions. That means if a user holds excessive rights, inherited from stale sharing links, overly broad group memberships, or anonymous shares that were never revoked, Cowork can act on that data on their behalf, at scale and automatically.
This is not a vulnerability in Cowork. It is an amplification of existing risk. An agent that can create documents, send communications, and search across an entire tenant is only as safe as the permissions it inherits. If those permissions are not clean, the exposure is real.
This mirrors the data governance challenge organizations faced when deploying standard Microsoft 365 Copilot. The underlying principle is the same: an AI that operates with your users' rights will surface whatever those rights expose. Overpermissioned tenants, a documented risk factor wherever access reviews are infrequent, become significantly more consequential when a capable automation agent is in play.
On the compliance side, Microsoft Purview DLP policies apply to Cowork interactions, and audit logs are available through Microsoft Purview Audit at no additional cost. Both are documented in the official Cowork governance documentation.
One documented limitation worth noting for security teams: Cowork cannot read encrypted files, even when the user has access to them.
The practical takeaway is straightforward. Before deploying Cowork at scale, the state of permissions in your tenant needs to be known and controlled. An access rights and sharing audit is a prerequisite, the same one that applies to standard Microsoft 365 Copilot deployment.
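As a concrete illustration of what such an audit checks for, the sketch below flags sharing entries whose scope widens exposure (anonymous or org-wide links). The entries are shaped loosely like Microsoft Graph `permission` resources, but the payload here is a simplified assumption; a real audit would enumerate drive items through the Graph API and run each item's permissions through a check like this.

```python
def flag_risky_permissions(permissions: list[dict]) -> list[str]:
    """Return a finding for each permission entry that widens exposure.

    Each entry loosely mirrors a Graph `permission` resource: an optional
    `link` object with a `scope` ("anonymous", "organization", "users") and
    a `type` ("view", "edit"), plus an `item` name added for readability.
    """
    findings = []
    for p in permissions:
        link = p.get("link") or {}
        scope = link.get("scope")
        if scope == "anonymous":
            findings.append(f"anonymous link ({link.get('type', '?')}) on {p.get('item', '?')}")
        elif scope == "organization":
            findings.append(f"org-wide link ({link.get('type', '?')}) on {p.get('item', '?')}")
    return findings

sample = [
    {"item": "Budget.xlsx", "link": {"scope": "anonymous", "type": "edit"}},
    {"item": "Roadmap.docx", "link": {"scope": "users", "type": "view"}},
]
findings = flag_risky_permissions(sample)
```

Every finding such a check surfaces is data Cowork could act on automatically under that user's identity, which is why the cleanup belongs before enablement, not after.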
Enabling Cowork requires four prerequisites to be in place.
Note: the admin account itself must also be enrolled in Frontier for Cowork to appear in Agent management in the Admin Center. This is a documented step that is easy to miss on first setup.
Once prerequisites are in place, admins manage access through Microsoft 365 Admin Center, under Copilot, then Agents. Three availability modes are supported: available to all licensed users (default), available to specific users or security groups only, or blocked for the entire organization. Admins can also pre-deploy Cowork for selected users and pin it in the Copilot rail without requiring any action from end users.
For a controlled rollout, starting with a limited pilot group is the recommended approach. It lets you validate how the agent behaves in your specific environment before broader enablement.
Cowork represents a meaningful advance in Microsoft 365 workflow automation. But it operates on a principle that organizations deploying Copilot already know: the safety of an AI agent in M365 is directly proportional to the quality of the permissions it inherits.
An agent that can create Word documents, send emails, post to Teams channels, and search across the entire tenant runs with the user's rights. If those rights include access to sensitive data the user should no longer have, the agent treats that data as fully accessible. This is not a hypothetical. It is Cowork's documented operating model.
Organizations that have already cleaned up their M365 permissions, whether through external sharing reviews, removal of anonymous links, or periodic access recertification, are in a significantly stronger position to use Cowork without unnecessary exposure.
Those that have not yet done that work have a prerequisite to address before enabling the agent for their users.