
A Microsoft MVP helping develop careers, scale and grow businesses by empowering everyone to achieve more with Microsoft 365
In a recent YouTube walkthrough, Daniel Anderson [MVP] demonstrates how AI in SharePoint now accepts natural language and voice to create document libraries and metadata. The short demo shows him building an Agreements library by speaking commands that add date columns, choice fields, and custom metadata, which he then tests by uploading documents. Importantly, Anderson frames the feature as part of the public preview for Wave 3 of Microsoft 365 Copilot, and he highlights licensing and backend requirements that administrators should note. As a result, the video serves as both a practical guide and an early look at how voice input will change content management workflows.
First, Anderson activates the voice interface and instructs the system to create a library named Agreements, and Copilot translates these instructions into a functioning SharePoint library. Then, he adds columns using plain speech: a date column, single-choice options, and a multi-select metadata field, all without typing. Next, he uploads sample documents to verify that the new columns populate correctly and that metadata prompts appear as expected during the upload process. Finally, he shows Copilot checking for existing libraries before creating new ones, which prevents duplicates and demonstrates a level of context awareness.
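Although Copilot handles all of this conversationally, the end result is an ordinary SharePoint document library, so it can help to picture the structure being created. The sketch below is a rough, hypothetical equivalent built with the Microsoft Graph API rather than anything shown in the video; the site ID, access token, column names, and choice values are all placeholder assumptions.

```typescript
// Illustrative sketch only: create an "Agreements" document library with the
// kinds of columns shown in the demo (a date column, a single-choice column,
// and a multi-select choice column) through Microsoft Graph. The site ID,
// token, column names, and choice values below are placeholder assumptions.
const GRAPH = "https://graph.microsoft.com/v1.0";
const SITE_ID = "<your-site-id>";           // placeholder
const ACCESS_TOKEN = "<your-access-token>"; // placeholder, e.g. acquired via MSAL

async function graph(path: string, init: RequestInit = {}): Promise<any> {
  const res = await fetch(`${GRAPH}${path}`, {
    ...init,
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      "Content-Type": "application/json",
    },
  });
  if (!res.ok) throw new Error(`Graph call failed: ${res.status} ${await res.text()}`);
  return res.json();
}

async function ensureAgreementsLibrary(): Promise<void> {
  // Mirror the duplicate check from the demo: only create the library if no
  // list with the same display name already exists on the site.
  const existing = await graph(`/sites/${SITE_ID}/lists`);
  if (existing.value.some((l: any) => l.displayName === "Agreements")) {
    console.log("Agreements library already exists; skipping creation.");
    return;
  }

  const library = await graph(`/sites/${SITE_ID}/lists`, {
    method: "POST",
    body: JSON.stringify({
      displayName: "Agreements",
      list: { template: "documentLibrary" },
      columns: [
        // Date column, e.g. when the agreement takes effect.
        { name: "AgreementDate", dateTime: {} },
        // Single-choice column rendered as a dropdown.
        {
          name: "AgreementType",
          choice: { allowTextEntry: false, choices: ["NDA", "MSA", "SOW"], displayAs: "dropDownMenu" },
        },
        // Checkbox rendering allows multiple selections.
        {
          name: "Regions",
          choice: { allowTextEntry: false, choices: ["EMEA", "APAC", "Americas"], displayAs: "checkBoxes" },
        },
      ],
    }),
  });
  console.log(`Created library with id ${library.id}`);
}

ensureAgreementsLibrary().catch(console.error);
```

The duplicate check at the top mirrors the context awareness Anderson demonstrates: the library is created only when no list with the same display name already exists on the site.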
The video notes specific prerequisites: a valid Microsoft 365 Copilot license and Anthropic enabled as a sub-processor, which together power the natural language and voice features. In addition, Anderson mentions tenant admin steps tied to the modern experience preview, and he points out that property bag updates can now happen without enabling custom scripts, thanks to new settings. Consequently, IT teams must plan for licensing, compliance, and tenant configuration before rolling this out broadly, and administrators should test in controlled environments before wider deployment.
Voice-driven library creation promises clear productivity gains for content creators and mobile users, because it reduces typing and speeds routine tasks like adding metadata fields. Moreover, the feature improves accessibility, enabling users who prefer or require speech to work effectively in SharePoint, and it integrates with corporate communication tools like Viva Amplify for broader outreach. Additionally, the ability to ask Copilot to check for existing libraries reduces clutter and overlaps across sites, which helps maintain cleaner information architecture. Overall, teams that need rapid, consistent document structure will likely find immediate value.
However, the convenience of voice commands brings tradeoffs around accuracy, privacy, and governance that organizations cannot ignore. For instance, voice recognition can struggle in noisy environments or with specialized vocabulary, which may lead to incorrect column types or metadata that then require manual correction. Furthermore, reliance on third-party sub-processors like Anthropic raises compliance questions for regulated sectors, and the licensing costs of Microsoft 365 Copilot may limit adoption for smaller teams. Consequently, IT leaders must balance productivity gains against these operational and legal considerations.
Integration with existing SharePoint customizations and tenant policies can create friction during adoption, because legacy sites and classic pages might not immediately support the modern voice-driven workflow. In addition, automated metadata creation depends on accurate intent detection; when intent parsing fails, content discoverability and downstream automation can suffer. Therefore, teams should plan staged rollouts, include fallback manual controls, and train content owners to validate AI-generated settings. In this way, organizations can reduce risk while still benefitting from the new capabilities.
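One way to put that validation step into practice is to compare the columns a Copilot-created library actually exposes against the schema the content owner intended. The sketch below is a hypothetical check using Microsoft Graph, not a feature from the video; the site ID, access token, and expected column names are placeholder assumptions.

```typescript
// Illustrative sketch only: verify that a Copilot-created library exposes the
// columns a content owner expected. SITE_ID, ACCESS_TOKEN, and the expected
// column names are placeholders for illustration.
const GRAPH = "https://graph.microsoft.com/v1.0";
const SITE_ID = "<your-site-id>";           // placeholder
const ACCESS_TOKEN = "<your-access-token>"; // placeholder

const EXPECTED_COLUMNS = ["AgreementDate", "AgreementType", "Regions"];

async function validateLibrary(listName: string): Promise<void> {
  // Fetch all lists on the site along with their column definitions.
  const res = await fetch(`${GRAPH}/sites/${SITE_ID}/lists?$expand=columns`, {
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Graph call failed: ${res.status} ${await res.text()}`);
  const lists = (await res.json()).value;

  const library = lists.find((l: any) => l.displayName === listName);
  if (!library) {
    console.warn(`Library "${listName}" was not found; review the Copilot run.`);
    return;
  }

  // Flag any expected column that the AI-generated library is missing.
  const actual = new Set(library.columns.map((c: any) => c.name));
  const missing = EXPECTED_COLUMNS.filter((name) => !actual.has(name));
  if (missing.length) {
    console.warn(`Missing columns on "${listName}": ${missing.join(", ")}`);
  } else {
    console.log(`"${listName}" matches the expected schema.`);
  }
}

validateLibrary("Agreements").catch(console.error);
```

A check along these lines could run during a staged rollout, flagging libraries whose AI-generated settings drifted from the intended design before discoverability or downstream automation is affected.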
To move forward, Anderson’s demo suggests a practical path: pilot the feature with a small set of libraries, evaluate metadata accuracy, and measure time saved versus manual setup. Next, engage governance teams early to map compliance requirements and confirm that any required sub-processor arrangements meet policy needs. Finally, communicate clear guidelines to content creators and provide quick guides for when to use voice versus manual editing, so teams retain control while exploring faster workflows. By following these steps, organizations can balance innovation with stability.
In short, the video by Daniel Anderson [MVP] offers a hands-on preview of a meaningful shift toward conversational SharePoint management, and it shows how natural language and voice can speed library and metadata creation. At the same time, the demo highlights that practical deployment requires attention to licensing, privacy, and tenant configuration, as well as plans for error handling and governance. Therefore, while the feature promises improved productivity and accessibility, successful adoption will depend on careful pilot testing and collaboration between IT, compliance, and content teams. Overall, the walkthrough provides a useful, balanced look at what organizations can expect from this early stage of the voice command upgrade.
Keywords: SharePoint voice commands, SharePoint voice integration, SharePoint voice search, SharePoint speech recognition, Microsoft 365 voice commands, SharePoint voice navigation, SharePoint voice assistant, SharePoint voice control features