Microsoft’s updated policy allows storage of some Bing conversations




Microsoft has unveiled a set of updated policies for its AI services, set to take effect on September 30th. The policies address emerging challenges in generative AI, particularly data usage and responsible development practices. However, Microsoft has stated that for Bing Chat Enterprise it does not retain conversations or use that data to train its AI models. The service will be available to Microsoft 365 subscribers starting next month.

The five key policy points cover a range of areas. Users are prohibited from attempting to reverse engineer AI models, preventing the discovery of underlying components. Data extraction via methods like web scraping is disallowed unless explicitly permitted. A significant clause restricts users from employing AI data to create or enhance other AI services.

The fourth point is worth highlighting in full:

  • iv. Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.

In other words, Microsoft will process and store user inputs and AI-generated outputs in order to monitor for and prevent abusive or harmful uses of its AI services. This data management approach aligns with growing concerns about data privacy and responsible AI development.

Here is the list of all the points:

  1. Reverse Engineering: Do not attempt to discover the underlying components of Microsoft’s AI models, algorithms, and systems.
  2. Extracting Data: Web scraping, harvesting, and similar data extraction methods may not be used to retrieve data from Microsoft’s AI services without permission.
  3. Limits on Data Usage: Microsoft’s AI services cannot be used to create or improve other AI services.
  4. Use of User Content: Microsoft stores and processes user inputs and AI-generated outputs to prevent abusive or harmful use.
  5. Third-Party Claims: Users are responsible for handling third-party claims related to using Microsoft’s AI services, such as copyright infringement claims from generated content.


These policies follow similar moves by other tech giants, including Meta, Google, and OpenAI. The changes reflect the increasing importance of ethical data handling and AI development, particularly given the surge in AI-generated content across the web.

Microsoft’s data management strategy addresses user concerns in a landscape where AI is becoming increasingly integrated into daily life. It sets a precedent for responsible AI service usage across the industry.

via TheRegister