This policy defines the acceptable and unacceptable use of AI for Batvoice staff.
AI is incapable of aligning itself with our values. It cannot be held accountable, so it cannot embody ownership. It is incapable of innovation, since it merely predicts the outcome of following what is already established. Its output is careless and can have no regard for others. Consequently, any use of AI must limit its impact on our work, as specified in the rest of this policy.
On the other hand, AI remains a major part of our core business. It’s the best tool to date for many use cases, such as speech recognition, embedding generation, and more. It’s important that we stay in touch with the progress these legitimate uses are making, and that we can recognize generated output when we encounter it. Consequently, a blanket ban would not serve us either.
This policy attempts to balance these factors, explicitly disallowing certain uses and explicitly allowing others. In any cases that aren’t covered, users are encouraged to weigh the same considerations in order to make informed choices.
Definitions
- AI (Artificial Intelligence) is any artificial system that makes decisions on the basis of inputs. This encompasses much more than what we commonly understand to be AI, including logic languages like Prolog. As such, we refine this definition by adding the requirement that the decision must be the result of emergence, rather than the direct application of a human-defined rule.
- Emergence is a phenomenon under which a set of parts exhibits a higher-order set of properties or behaviors that have no direct causal link to any individual part. This is a complex subject, so for more definitions or details, please refer to the CNRTL and Samuel Alexander’s “Space, Time and Deity” volumes one and two.
- Sensitive Company Data is any data belonging to the company whose disclosure would be damaging to the company or any of its staff.
Company Data
Sensitive company data must not be sent to any remote service that would not otherwise have access to it. This applies in general, and especially in the context of AI usage. Here are some example uses this affects:
- Using the summarization features in Notion on company documents already hosted on Notion is acceptable, because Notion already has access to said documents.
- Importing documents into Notion purely for the sake of using the summarization feature, if the items otherwise have no reason to be in Notion, is not acceptable.
- Using an external AI scheduling tool to help schedule tasks is acceptable, under the condition that the tasks (as sent to the remote tool) do not contain sensitive data.
- Using a remote AI code completion system or patch generator is not acceptable, because our sources are considered sensitive data, and the remote service would not have access to them otherwise.
- This provision does not apply to GitHub Copilot, because GitHub already has access to our sources; however, its use remains unacceptable under other provisions of this policy.
Note that running an arbitrary model locally (i.e. without calling out to an external service), or on company hardware intended for this purpose (i.e. calling out to an internal service), is not affected by this provision, since no company data is sent out. Such uses remain subject to the rest of the policy.
Decisions
AI must not be used to make decisions, as we remain ultimately responsible for them. AI should also be avoided in the process of reaching decisions, because most serious problems are problems of misconception, and those cannot be avoided when using AI. Here are some example affected uses:
- Filtering applicants using an AI tool is unacceptable, as the AI effectively makes a decision.
- Using an AI tool to rephrase or spellcheck an email or document to match a certain style is only acceptable if the result is reviewed again by a human; otherwise the AI would effectively be making a decision.