Settings
This page documents every field available under the Repository CR settings block. Use this reference when you need to configure authorization policies, provider-specific behavior, or provenance settings at the repository level.
Settings fields
pipelinerun_provenance
Controls where Pipelines-as-Code fetches PipelineRun definitions from. Options:
- source - Fetch definitions from the event source branch/SHA (default)
- default_branch - Fetch definitions from the repository default branch
settings:
  pipelinerun_provenance: "source"

github_app_token_scope_repos
Lists additional repositories that Pipelines-as-Code includes in the GitHub App token scope. Use this when your PipelineRuns need to access other repositories on the same GitHub App installation, such as shared libraries or common task repositories.
settings:
  github_app_token_scope_repos:
    - "organization/shared-library"
    - "organization/common-tasks"

policy
Defines authorization policies for the repository. These policies control which users can trigger PipelineRuns under different conditions.
ok_to_test
Lists the users or teams allowed to trigger PipelineRuns on pull requests from external contributors by commenting /ok-to-test. These are typically maintainers or trusted contributors who can authorize CI for external contributions.
settings:
  policy:
    ok_to_test:
      - "maintainer-username"
      - "trusted-contributor"

pull_request
Lists the users or teams who can trigger PipelineRuns on their own pull requests, even if they would not normally have permission. Use this to grant specific external contributors the ability to run CI.
settings:
  policy:
    pull_request:
      - "external-contributor"
      - "community-member"

Both policies can be combined in a single block:

settings:
  policy:
    ok_to_test:
      - "team-lead"
      - "senior-dev"
    pull_request:
      - "trusted-external"

Provider-specific settings
GitHub settings
Configures GitHub-specific behavior for repositories hosted on GitHub.
comment_strategy
Controls how Pipelines-as-Code posts comments on GitHub pull requests. Options:
- "" (empty) - Default behavior (create new comments)
- disable_all - Disables all comments on pull requests
- update - Updates a single comment per PipelineRun on every trigger
settings:
  github:
    comment_strategy: "update"

GitLab settings
Configures GitLab-specific behavior for repositories hosted on GitLab.
comment_strategy
Controls how Pipelines-as-Code posts comments on GitLab merge requests. Options:
- "" (empty) - Default behavior (create new comments)
- disable_all - Disables all comments on merge requests
- update - Updates a single comment per PipelineRun on every trigger
settings:
  gitlab:
    comment_strategy: "update"

Forgejo/Gitea settings
Configures Forgejo- and Gitea-specific behavior for repositories hosted on Forgejo or Gitea.
user_agent
Sets the User-Agent header on API requests to the Gitea/Forgejo instance. This is useful when the instance sits behind an AI-scraping protection proxy (e.g., Anubis) that blocks requests without a recognized User-Agent. Defaults to pipelines-as-code/<version> when left empty.
settings:
  forgejo:
    user_agent: "MyCustomBot/1.0"

comment_strategy
Controls how Pipelines-as-Code posts comments on Forgejo/Gitea pull requests. Options:
- "" (empty) - Default behavior (create new comments)
- disable_all - Disables all comments on pull requests
- update - Updates a single comment per PipelineRun on every trigger
settings:
  forgejo:
    comment_strategy: "update"

AI analysis settings
Configures AI/LLM-powered analysis of pipeline failures and pull request content.
enabled
Enables or disables AI analysis for this repository.
settings:
  ai:
    enabled: true

provider
Sets the LLM provider for analysis. Supported values:
- openai - OpenAI (GPT models)
- gemini - Google Gemini
settings:
  ai:
    provider: "openai"

api_url
Overrides the default API endpoint for the LLM provider. Defaults:
- OpenAI: https://api.openai.com/v1
- Gemini: https://generativelanguage.googleapis.com/v1beta
Set this when you use self-hosted LLM instances, proxy services, or alternative endpoints.
settings:
  ai:
    api_url: "https://custom-llm.example.com/v1"

secret_ref
References the Kubernetes Secret containing the LLM provider API token.
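The referenced Secret is an ordinary Kubernetes Secret. A minimal sketch, reusing the openai-token and api-key names from the settings example; the namespace is assumed to match the Repository CR and the token value is a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openai-token              # matches settings.ai.secret_ref.name
  namespace: pipelines-as-code    # assumed: same namespace as the Repository CR
type: Opaque
stringData:
  api-key: "<your-llm-api-token>" # key matches settings.ai.secret_ref.key
```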
settings:
  ai:
    secret_ref:
      name: openai-token
      key: api-key

timeout_seconds
Sets the maximum time in seconds to wait for LLM analysis (default: 30). Valid range: 1-300.
settings:
  ai:
    timeout_seconds: 60

max_tokens
Sets the maximum response length from the LLM (default: 1000). Valid range: 1-4000 tokens.
settings:
  ai:
    max_tokens: 2000

roles
Defines the analysis scenarios and their configurations. You must specify at least one role.
model
Specifies the LLM model for this role. If omitted, Pipelines-as-Code uses provider-specific defaults:
- OpenAI: gpt-5-mini
- Gemini: gemini-2.5-flash-lite

output
Specifies where the analysis result is posted. Only pr-comment is supported (default).

context_items
Configures what context data Pipelines-as-Code includes in the analysis request.
container_logs
Configures whether Pipelines-as-Code includes container/task logs in the analysis context.
settings:
  ai:
    roles:
      - name: "failure-analysis"
        prompt: "Analyze the following CI/CD failure and suggest fixes"
        model: "gpt-4"
        on_cel: "event_type == 'pull_request' && status == 'failed'"
        context_items:
          commit_content: true
          error_content: true
          container_logs:
            enabled: true
            max_lines: 100

Complete example
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: example-repo
  namespace: pipelines-as-code
spec:
  url: "https://github.com/organization/repository"
  settings:
    # Provenance configuration
    pipelinerun_provenance: "source"

    # GitHub App token scoping
    github_app_token_scope_repos:
      - "organization/shared-tasks"
      - "organization/common-library"

    # Authorization policies
    policy:
      ok_to_test:
        - "team-lead"
        - "senior-engineer"
        - "trusted-maintainer"
      pull_request:
        - "approved-contributor"

    # GitHub-specific settings
    github:
      comment_strategy: "update"

    # AI analysis configuration
    ai:
      enabled: true
      provider: "openai"
      api_url: "https://api.openai.com/v1"
      secret_ref:
        name: openai-credentials
        key: api-key
      timeout_seconds: 45
      max_tokens: 1500
      roles:
        - name: "pr-failure-analysis"
          prompt: |
            You are a CI/CD expert. Analyze the following pipeline failure and provide:
            1. Root cause analysis
            2. Specific fix recommendations
            3. Prevention strategies
          model: "gpt-4"
          on_cel: 'event_type == "pull_request" && status == "failed"'
          output: "pr-comment"
          context_items:
            commit_content: true
            pr_content: true
            error_content: true
            container_logs:
              enabled: true
              max_lines: 100
        - name: "security-review"
          prompt: "Review this change for potential security issues"
          model: "gpt-4"
          on_cel: 'event_type == "pull_request" && has_label("security-review")'
          context_items:
            commit_content: true
            pr_content: true

Settings inheritance
You can define settings at the global level (in the ConfigMap) or the repository level (in the Repository CR). When both exist, repository-level settings override global settings.
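For example, a single repository can override whatever comment behavior the global ConfigMap establishes. A sketch, in which the repository name and URL are illustrative:

```yaml
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: quiet-repo                # illustrative name
  namespace: pipelines-as-code
spec:
  url: "https://github.com/organization/quiet-repo"  # illustrative URL
  settings:
    github:
      # Overrides the global comment behavior for this repository only
      comment_strategy: "disable_all"
```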
Related resources
- Repository Spec – Complete Repository specification
- ConfigMap Reference – Global configuration settings