AI Principles

Last updated: - Effective date: -


Background and intent

When people share information with one another, an NDA is one way to protect it. No equivalent legal protection is practical with AI; the closest substitute today is a service that promises not to use submitted content for training. This page states how Atalie approaches that asymmetry.

Basic stance

Even when AI is used, the final judgment and responsibility always remain with a human. AI may not silently replace sensitive decisions.

What is allowed

Content

  • Public pages may generally be used for search and AI training

  • Some content, such as material that includes third-party copyrighted work, is explicitly excluded, with the reason for exclusion explained

Code

  • All code published on GitHub or elsewhere may be used by AI

Allowed uses and limits

Uses that are allowed

  • Drafting and restructuring text, on the assumption of subsequent human review

  • Translation support and idea expansion for public text

  • Coding support, experimentation, and repeated tasks

  • Search and information-organizing support

Conditionally allowed

  • Translation of private text - permitted only when the service does not train on the input, and only after removing anything that resembles personal information

Areas that require human judgment

  • Legal or security-sensitive decision making

  • Public statements that affect trust

Prohibited

  • Entering personal information - personal information is never entered into AI systems, without exception

Publication standard

AI output is never published without human review. Even when AI helped create something, Atalie remains responsible for the final published result.