
Pragmatic Works published a tutorial video titled "Creating a Power Page with Claude - Session 4: Creating the Data Model" that focuses on building a real Dataverse backbone for a Power Pages site. In the video, presenter Brian Knight moves beyond mock data and uses Claude Code to generate tables, relationships, and an ER diagram automatically. He walks viewers through reviewing and refining the proposed schema before any changes are applied to the environment, so the session combines automation with human review to speed development without sacrificing control.
The core demo runs the /setup-datamodel command and inspects the agent's proposed tables and relationships, showing how the AI infers structure from the existing site. Then, the presenter adjusts naming conventions, such as table prefixes, and converts service request IDs to consistent auto-generated numbers with a chosen prefix. After approval, the agent creates the tables, builds relationships, and validates the deployment in Dataverse. Finally, the session packages the components into a solution, builds a quick model-driven app, and populates sample records to test end-to-end behavior.
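The auto-numbering convention described above can be illustrated with a small sketch. This is not the video's code; the `SR` prefix and five-digit padding are assumptions chosen to mirror how Dataverse auto-number columns combine a fixed prefix with a zero-padded sequence:

```python
def format_autonumber(prefix: str, seed: int, width: int = 5) -> str:
    """Format a sequential record ID in the style of a Dataverse
    auto-number column, e.g. 'SR-00001'.

    The prefix and padding width here are illustrative assumptions,
    not values taken from the tutorial.
    """
    return f"{prefix}-{seed:0{width}d}"

# Consecutive service requests get consistent, sortable identifiers.
print(format_autonumber("SR", 1))   # SR-00001
print(format_autonumber("SR", 42))  # SR-00042
```

Consistent prefixes like this make records easy to recognize across development, test, and production environments, which is the benefit the session highlights.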
Furthermore, the presenter highlights practical checks you should perform before allowing automated changes, including confirming required fields and verifying lookups. He emphasizes saving manifests and preserving the proposed ER diagram for later review and sample data generation. As a result, the workflow leaves an audit trail that teams can use during subsequent deployment or troubleshooting. This gives developers a clear path from site code to a functioning data model.
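The pre-approval checks mentioned above can be automated in part. The sketch below is a hypothetical reviewer helper, assuming the agent's proposal can be represented as a plain dictionary of table definitions; the structure and field names are invented for illustration:

```python
# Hypothetical pre-approval check: before letting the agent apply a
# proposed schema, confirm that declared required fields actually
# exist and that every lookup points at a table in the same proposal.
def review_proposal(tables: dict[str, dict]) -> list[str]:
    issues: list[str] = []
    for name, spec in tables.items():
        for req in spec.get("required", []):
            if req not in spec.get("fields", {}):
                issues.append(f"{name}: required field '{req}' is not defined")
        for field, target in spec.get("lookups", {}).items():
            if target not in tables:
                issues.append(f"{name}.{field}: lookup target '{target}' is undefined")
    return issues

# Example proposal with one problem: the 'contact' table is missing.
proposal = {
    "servicerequest": {
        "fields": {"title": "text", "contactid": "lookup"},
        "required": ["title"],
        "lookups": {"contactid": "contact"},
    },
}
print(review_proposal(proposal))
```

A checklist function like this is cheap to run against a saved manifest and fits the audit-trail practice the presenter recommends.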
The process begins by scanning frontend artifacts like templates, pages, and components so the agent can infer which entities and fields the site expects. Next, the agent queries the existing Dataverse environment to find matching tables, thereby reducing duplication and preserving normalized data where appropriate. It then generates a schema proposal with table definitions, data types, and relationships, plus a visual ER diagram for clarity. Importantly, the system waits for explicit human approval before making any changes, enforcing a human-in-the-loop safeguard.
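The first step of that process, inferring entities and fields from frontend artifacts, can be sketched with a simple scan. This is an assumption about how such inference might work, not the tool's actual implementation; Power Pages templates use Liquid-style `{{ entity.field }}` bindings, which a regex can harvest:

```python
import re

# Hypothetical sketch: collect candidate Dataverse tables and columns
# by scanning template text for Liquid-style bindings such as
# {{ servicerequest.title }}. Real inference would be far richer.
BINDING = re.compile(r"\{\{\s*(\w+)\.(\w+)\s*\}\}")

def infer_fields(template_text: str) -> dict[str, set[str]]:
    entities: dict[str, set[str]] = {}
    for entity, field in BINDING.findall(template_text):
        entities.setdefault(entity, set()).add(field)
    return entities

page = "<h1>{{ servicerequest.title }}</h1><p>{{ servicerequest.status }}</p>"
print(infer_fields(page))
```

The output of a scan like this would then be reconciled against the existing Dataverse tables before any schema proposal is drafted, matching the deduplication step described above.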
Additionally, the video shows how the tool handles dependencies and ordering when creating records and relationships, which helps prevent runtime errors during deployment. The agent supports packaging the new tables into a deployable solution and offers commands to add sample data for validation. While the automation accelerates routine tasks, it still relies on developers to make judgments about business naming, prefixing, and cardinality. Therefore, the AI reduces mechanical effort while keeping strategic decisions in human hands.
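The dependency-ordering idea can be made concrete with a topological sort: a table must exist before any table that looks up to it is created. This is a generic sketch using Python's standard library, with invented table names; the video does not show the tool's internal ordering logic:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each table maps to the tables its
# lookup columns target, so lookup targets are created first and
# relationship creation never references a missing table.
depends_on = {
    "servicerequest": {"contact", "requesttype"},  # two lookups
    "requesttype": set(),
    "contact": set(),
}

# static_order() yields dependency-free tables first.
order = list(TopologicalSorter(depends_on).static_order())
print(order)  # 'servicerequest' comes last
```

The same ordering applies when inserting sample records: parent rows must exist before child rows that reference them, which is exactly the runtime-error class the video says the tool avoids.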
By the end of the session, the presenter has a functioning data model inside Dataverse, complete with relationships and sample records that allow quick end-to-end testing. The demonstration shows how auto-numbering conventions and consistent prefixes make data easier to manage across environments. Moreover, packaging the work into a solution prepares the model for future deployments and version control. Teams can then build simple model-driven apps to review records and confirm site behavior without modifying the live site directly.
The video also notes that the workflow requires a paid Claude plan, which affects cost and accessibility for some organizations. Teams must therefore weigh the productivity gains against licensing expenses and governance policies. Early adopters who can justify the cost report substantial time savings on repetitive tasks, while smaller teams may prefer manual setup until they scale or secure budget approval.
One clear tradeoff involves speed versus control: automation can propose sensible schemas quickly, yet it may miss nuanced business rules that only domain experts know. Therefore, teams should adopt a review checklist that covers naming conventions, required fields, and data retention policies before approving changes. Another challenge is managing schema drift across environments; consequently, consistent solution packaging and versioning become essential to keep production, test, and development aligned. In short, automation reduces friction but increases the need for disciplined governance.
Security and compliance present additional concerns because automated tools interact directly with live environments, which calls for strict permission boundaries and audit trails. To manage this, teams should run proofs of concept in isolated sandboxes and enable detailed logging during initial runs. Finally, training and onboarding matter: developers and administrators must understand both the agent's outputs and the underlying Dataverse principles to make informed decisions. Thus, combining automation with clear policies and education yields the best outcomes.
Teams considering this approach should pilot the workflow on a small project and document the review steps they use before approving schema changes. Additionally, enforcing naming standards and solution packaging from the start reduces cleanup later and eases migration between environments. If licensing allows, the tool can accelerate prototyping and free developers to focus on business logic rather than boilerplate schema tasks. Ultimately, the session provides a pragmatic path from site code to a tested Dataverse model while highlighting the governance and skill-building required to scale safely.