The author suggests using GitHub's AI tool to check every value in the moderation results object and confirm that none of its properties is true. This ensures that harmful responses are blocked from users unless every moderation check has returned false.
A short code snippet that checks for true values across all properties of the moderationResults object is provided. It uses Object.values() to build an array of all the values and every() to confirm that each one is false, which indicates the response is safe. When all values are false, the 'isSafe' constant becomes true.
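The check described above can be sketched as follows. This is a minimal illustration: the shape of the moderationResults object and its category names are assumptions, not taken from the original article.

```javascript
// Hypothetical moderation results: each property is true if that
// category was flagged by a moderation check (property names assumed).
const moderationResults = {
  hate: false,
  harassment: false,
  selfHarm: false,
  violence: false,
};

// Object.values() collects every flag value; every() confirms that
// each one is false, i.e. no category was flagged.
const isSafe = Object.values(moderationResults).every(
  (value) => value === false
);

console.log(isSafe); // true, since no category was flagged
```

If any single property were true, every() would return false and the response would be treated as unsafe.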
Additional guidelines are provided for using the AI-enabled assistant, such as letting the tool generate candidate code modifications and choosing the most suitable one.
However, this is just a starting point; the author notes that more work is required before the modified code is ready for use.
The aim is to prevent users from receiving responses that were flagged during moderation. If the constant 'isSafe' is true, the response is shown to the user; otherwise it is withheld. In short, replies are logged to the console only after passing moderation.
These virtual assistants, or 'copilots', can understand and respond to user queries, enhancing user experience and interaction. Modern tooling also lets these assistants screen the content they handle, which helps keep interactions safe. For instance, checks ensure that every response is filtered before it reaches the user, helping uphold ethical guidelines.
The trick lies in training these AI assistants accurately, given the richness and complexity of human communication.
Platforms like OpenAI and GitHub have proven instrumental in achieving this, helping bridge the gap between humans and technology.
In the third entry of a blog series on Microsoft's Copilot, the focus is firmly on strengthening the moderation element for user safety. Coding enthusiasts building their own command-line copilot can follow the steps outlined in the article, which explains how to adjust the code, with Copilot's help, to shield the end user from undesirable responses. The goal of the modification is that a reply is shown to the user only when every moderation check has returned false.
Before starting, the blog urges readers to review the context and instructions in the first two parts of the series. From there, the modification process begins; a valuable first step is introducing a check against the values in the moderation results.
The subsequent steps involve using this newly created constant in the code and ensuring that none of the moderation values carries a true value. Copilot offers code suggestions along the way, making the process easier. The result is a concise method for verifying all properties of the 'moderationResults' object.
The code verifies that a response is safe to show: 'Object.values()' returns an array of all values in 'moderationResults', and 'every()' is then used to establish that each value is false. Only when every check passes is the response considered safe.
Thus, the reader can use the article's instructions to enforce stricter moderation practices, in effect shielding users from responses flagged by moderation. Finally, the blog touches on the ethical guidelines outlined by OpenAI, which emphasize safety, respect, and the promotion of positive interactions.