Interacting With CI Image Registries

How to interact with the CI image registries, set up service-account access, and work with images from a specific job execution.

Summary of Available Registries

The OpenShift CI system runs on OpenShift clusters; each cluster hosts its own image registry. Therefore, a number of image registries exist in the OpenShift CI ecosystem. The following table shows the public DNS of each registry and has some comments on their purpose:

| Cluster | Registry URL | Note |
|---------|--------------|------|
| app.ci | registry.ci.openshift.org | the authoritative, central CI registry |
| arm01 | registry.arm-build01.arm-build.devcluster.openshift.com | contains up-to-date image copies from the authoritative registry for jobs that run on this build farm only; only open to arm admins |
| build01 | registry.build01.ci.openshift.org | contains up-to-date image copies from the authoritative registry for jobs that run on this build farm only |
| build02 | registry.build02.ci.openshift.org | contains up-to-date image copies from the authoritative registry for jobs that run on this build farm only |
| vsphere | registry.apps.build01-us-west-2.vmc.ci.openshift.org | contains up-to-date image copies from the authoritative registry for jobs that run on this build farm only; only open to vsphere admins |

Container Image Data Flows

Today, three major data flows exist for container images in the OpenShift CI ecosystem. First, when a job executes on one of the build farm clusters, container images that need to be built for the execution will exist only on that cluster. Second, when changes are merged to repositories under test, updated images are built on a build farm and promoted to the central, authoritative registry. Users should always pull from this registry for any images they interact with. Third, when an image changes on the authoritative registry, that change is propagated to all build farm clusters so that the copies they hold are up-to-date and jobs that run there run with the correct container image versions.

Common Questions

How do I log in to pull images that require authentication?

Each registry is the internal OpenShift image registry of the cluster that hosts it, so authenticating to a registry requires authenticating to that cluster. Once logged in to the OpenShift cluster, the oc CLI can be used to authenticate to the registry in question. The primary example here is the central CI registry, registry.ci.openshift.org, which is backed by the app.ci cluster.

Using the list of clusters, navigate to the console URL. After logging in to the cluster via the console, use the “Copy login command” link in the top right to authenticate your local oc CLI. Then run oc registry login to authenticate to the registry.

$ oc registry login
info: Using registry public hostname registry.ci.openshift.org
Saved credentials for registry.ci.openshift.org

$ cat ~/.docker/config.json | jq '.auths["registry.ci.openshift.org"]'
{
  "auth": "token"
}

How do I get a token for programmatic access to the central CI registry?

If you’re developing an integration with the central CI registry, an OpenShift ServiceAccount should be used. Write a pull request to the openshift/release repository that adds a new directory under the release/clusters/app.ci/registry-access directory. In this directory, provide an OWNERS file to allow your team to make changes to your manifests and an admin_manifest.yaml file that creates your ServiceAccount and associated RBAC:

# this is the Namespace in which your ServiceAccount will live
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/description: Automation ServiceAccounts for MyProject
    openshift.io/display-name: MyProject CI
  name: my-project
---
# this is the ServiceAccount whose credentials you will use
kind: ServiceAccount
apiVersion: v1
metadata:
  name: image-puller
  namespace: my-project
---
# this grants your ServiceAccount rights to pull images
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-project-image-puller-binding
  # the namespace from which you will pull images
  namespace: ocp
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: system:image-puller
subjects:
  - kind: ServiceAccount
    namespace: my-project
    name: image-puller
---
# the Group of people who should be able to manage this ServiceAccount
kind: Group
apiVersion: v1
metadata:
  name: my-project-admins
users:
  # these names are GitHub usernames
  - bob
  - tracy
  - jim
  - emily
---
# this adds the admins to the project.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-project-viewer-binding
  namespace: my-project
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: view
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: my-project-admins
    namespace: my-project
---
# this grants the right to read the ServiceAccount's credentials and pull
# images to the admins.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-project-admins-binding
  namespace: my-project
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: pull-secret-namespace-manager
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: my-project-admins
    namespace: my-project

After the pull request is merged, your manifests will be automatically applied to the cluster hosting the central CI registry. Make sure your oc CLI is logged in to the app.ci cluster via the console; then you can generate pull credentials for your ServiceAccount with the oc CLI:

$ oc --namespace my-project registry login --service-account image-puller --registry-config=/tmp/config.json

The created /tmp/config.json file can then be used as a standard .docker/config.json authentication file.
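For illustration, the file uses the standard docker config layout: an auths map keyed by registry hostname, whose auth value is base64 of "user:token". The sketch below writes a stand-in file with a fake credential (the real one comes from oc registry login) and decodes it; --authfile is a real podman flag, but the pullspec placeholder is left for you to fill in.

```shell
# A stand-in for the file oc registry login writes; the token here is fake.
cat > /tmp/config.json <<'EOF'
{"auths": {"registry.ci.openshift.org": {"auth": "dW51c2VkOnNoYTI1Nn5mYWtl"}}}
EOF

# The auth field is base64 of "<user>:<token>" (standard docker config layout):
printf '%s' 'dW51c2VkOnNoYTI1Nn5mYWtl' | base64 -d; echo
# → unused:sha256~fake

# Any engine that understands docker config files can consume it, e.g.:
#   podman pull --authfile /tmp/config.json <pullspec>
```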

How can I access images that were built during a specific job execution?

Namespaces in which jobs execute on build farms are ephemeral and are garbage-collected an hour after a job finishes, so images from a specific job execution are only accessible for a short time after it completes.

In order to access these images, first determine the build farm on which the job executed by looking for a log line in the test output like:

2020/11/20 14:12:28 Using namespace https://console.build02.ci.openshift.org/k8s/cluster/projects/ci-op-2c2tvgti

This line identifies the build farm that executed the tests and the namespace on that cluster in which the execution occurred. In this example, the job executed on the build02 farm and used the ci-op-2c2tvgti namespace. All registry pullspecs are in the form <registry>/<namespace>/<imagestream>:<tag>, so if we needed to access the source image for this execution, the pullspec would be registry.build02.ci.openshift.org/ci-op-2c2tvgti/pipeline:src. In order to pull an image from a test namespace, you must be logged in to the registry and be the author of the pull request. Pull the image with any normal container engine:

$ podman pull registry.build02.ci.openshift.org/ci-op-2c2tvgti/pipeline:src
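The cluster and namespace can also be extracted mechanically from the log line; a small sketch using shell parameter expansion. The registry hostname pattern used on the last line holds for the build01/build02 farms (see the registry table above), not for the arm01 or vsphere registries.

```shell
# The "Using namespace" line from the example job output above.
line='2020/11/20 14:12:28 Using namespace https://console.build02.ci.openshift.org/k8s/cluster/projects/ci-op-2c2tvgti'

url=${line##* }                # last whitespace-separated field: the console URL
namespace=${url##*/}           # final path segment: the test namespace
host=${url#https://console.}   # strip the scheme and the "console." prefix
cluster=${host%%.*}            # first DNS label: the build farm name

echo "registry.$cluster.ci.openshift.org/$namespace/pipeline:src"
# → registry.build02.ci.openshift.org/ci-op-2c2tvgti/pipeline:src
```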

Warning: Only vSphere system administrators can access the images on registry.apps.build01-us-west-2.vmc.ci.openshift.org.

How do I access the latest published images for my component?

If the ci-operator configuration for your component configures image promotion, output container images will be published to the central CI registry when changes are merged to your repository. Two main promotion configurations are possible: specifying an ImageStream name and namespace, or a namespace and a target tag.

Granting Pull Privileges

In both cases, you must declare who may pull the selected images from the registry. By default, no pull privileges are granted for any new image. You may either allow any authenticated user to pull images from the target Namespace or list specific users who may pull images. In either case, write a pull request to the openshift/release repository that adds a new directory under the release/clusters/app.ci/registry-access directory. In this directory, provide an OWNERS file to allow your team to make changes to your manifests and an admin_manifest.yaml file that implements your pull policy. Here’s an example of allowing all authenticated users to pull:

# this is the Namespace in which your images live
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/description: Published Images for MyProject
    openshift.io/display-name: MyProject CI
  name: my-project
---
# this grants all authenticated users rights to pull images
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-project-image-puller-binding
  namespace: my-project
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: system:image-puller
subjects:
# this is the set of all authenticated users
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated

The following example explicitly lists a set of users who may pull images:

# this is the Namespace in which your images live
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/description: Published Images for MyProject
    openshift.io/display-name: MyProject CI
  name: my-project
---
# the Group of people who should be able to pull images
kind: Group
apiVersion: v1
metadata:
  name: my-project-image-pullers
users:
  # these names are GitHub usernames
  - bob
  - tracy
  - jim
  - emily
---
# this grants the right to pull images to the group
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-project-image-pullers-binding
  namespace: my-project
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: system:image-puller
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: my-project-image-pullers
  namespace: my-project

Publication of New Tags

A configuration that specifies the ImageStream name looks like the following and results in new tags on that stream for each image that is promoted:

images:
- dockerfile_path: images/my-component
  from: base
  to: my-component
promotion:
  name: "4.7"
  namespace: ocp

The my-component image can be pulled from the authoritative registry with:

$ podman pull registry.ci.openshift.org/ocp/4.7:my-component

Publication of New Streams

A configuration that specifies the ImageStream tag looks like the following and results in new streams in the namespace for each image that is promoted, with the named tag in each stream:

images:
- dockerfile_path: images/my-component
  from: base
  to: my-component
promotion:
  namespace: my-organization
  tag: latest

The my-component image can be pulled from the authoritative registry with:

$ podman pull registry.ci.openshift.org/my-organization/my-component:latest
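The two promotion styles map to pullspecs mechanically. A sketch of both mappings, using the example values from the two configurations above:

```shell
registry=registry.ci.openshift.org
component=my-component

# Promotion by ImageStream name: <registry>/<namespace>/<name>:<component>
tag_style="$registry/ocp/4.7:$component"
echo "$tag_style"       # → registry.ci.openshift.org/ocp/4.7:my-component

# Promotion by tag: <registry>/<namespace>/<component>:<tag>
stream_style="$registry/my-organization/$component:latest"
echo "$stream_style"    # → registry.ci.openshift.org/my-organization/my-component:latest
```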

Why am I getting an authentication error?

An authentication error may occur both in the case where you have not yet logged in to a registry and in the case where you logged in to the registry in the past.

I have not yet logged in to the registry.

Please follow the directions to log in to the registry.

I have logged in to the registry in the past.

An unfortunate side-effect of the architecture for container image registry authentication is that authentication errors occur when your authentication token has expired, even if the image you are attempting to pull requires no authentication. Authentication tokens expire once a month. All you need to do is follow the directions to log in to the registry again.

Last modified April 21, 2021: Add arm01 to the doc (fc8fc71)