Security Zones in OpenShift worker nodes — Part IV — User Restrictions and Recap

Luis Javier Arizmendi Alonso
Published in ITNEXT · Jul 21, 2020

This is the final post about how to configure Security Zones in your OpenShift workers. You should have already completed the configuration steps from the previous posts in this series.

This time we will configure RBAC and other restrictions so that only a subset of our OpenShift cluster users is able to use the Secure Zone.

Overview

Now everything is working. We could say that we are done, but only if we fully trust our OpenShift users, since any user can choose when to deploy workloads in the Secure Zone.

We don’t want that. We, as cluster administrators, want to set up certain namespaces, attached to some users/groups, that will be allowed to create new applications in the Secure Zone, while the rest of the users/namespaces won’t be able to use it. That means that the only one able to create new namespaces will be the cluster administrator (we will need to disable that option for the rest of the users).

I will create two groups of users:

  • Regular users
  • Secure-zone users

Regular users will only be able to use the Regular Zone, while Secure-zone users will be allowed to use both Zones (or just one, by configuring the namespace default nodeSelector, as we will see). If you want to set up the use case where you create a zone for untrusted users, you will need to configure the permissions the other way around, so that the new group of (untrusted) users can only use the new Zone while the rest of the users can use both. You should also configure the default nodeSelector in the namespace so the users don’t need to include it in the object definition, preventing their deployments from failing.

Preventing the usage of the Secure Zone

How do we “allow” someone to use the Secure Zone? By including the Tolerations in their namespace. Since we won’t allow users to create namespaces on their own, the cluster-admin will decide when to include that Toleration in a namespace and how to bind the namespace to a group of users. But we also need to prevent users from configuring their own Tolerations; otherwise, they could add the Tolerations needed to use the Secure Zone even when they are not supposed to.

There is a special annotation (scheduler.alpha.kubernetes.io/tolerationsWhitelist) on namespaces that lists the Tolerations that users in that namespace are allowed to configure. If a user tries to configure any Toleration that is not listed there, they will get an error. We will use that annotation.
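To make this concrete, here is a minimal sketch of what the annotation looks like on a namespace object (the namespace name and the Toleration values are just illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace  # illustrative name
  annotations:
    scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Equal", "value": "secure", "effect": "NoSchedule", "key": "securityzone"}]'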

There is one thing that you must take into consideration. When you create a POD, even if you don’t set up any Toleration, you will find two of them configured in the object definition:

...
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
...

If we set up the tolerationsWhitelist annotation we need to bear that in mind and include those “default” Tolerations in the allowed list; otherwise you will get this error when trying to create new PODs, because you would be using Tolerations that are not allowed (even though you didn’t specify them in your object):

pod tolerations (possibly merged with namespace default tolerations) conflict with its namespace whitelist

Since we don’t want users to configure any other Toleration, we include just those default Tolerations in the annotation and nothing more. That makes it impossible for them to add the Toleration required to use the Security Zone, and since the cluster administrator is the only one creating namespaces, this can be pre-configured and the users won’t be able to modify it.

Preventing NetworkPolicies modification

Remember that there are also other configurations at the namespace level that must be performed besides the Tolerations, such as the NetworkPolicies that prevent connections inside the SDN.

That kind of configuration is not like the previous one, which lives in an annotation of the namespace object that users cannot modify; NetworkPolicies are additional objects linked to the namespace that users can manage.

But what happens if the cluster administrator sets up some NetworkPolicies in the namespace and the users just delete them afterward? We should prevent users from modifying NetworkPolicies to be sure that won’t happen.

That will be done by removing the permissions on the NetworkPolicy object for the users. I will create a new RBAC role where those privileges are removed, instead of removing them from the default roles, following the strategy of touching the default OpenShift configurations as little as possible, which I’ve kept from the beginning.
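Once such a role is bound, a quick way to double-check the effect as cluster admin is oc auth can-i (user1 and rbac-regular are the example user and namespace that we will create later):

$ oc auth can-i create networkpolicies --as=user1 -n rbac-regular
no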

Including default annotations and NetworkPolicies in namespaces

There are some configurations to be done every time that a namespace is created:

  1. Give access to users/groups using the required role
  2. Create the tolerationsWhitelist annotation to prevent unwanted Toleration configurations from the users
  3. Create default NetworkPolicies

For points 2 and 3 we can perform a configuration that will make the cluster administrator’s life a little easier. We will include the tolerationsWhitelist annotation and default NetworkPolicies in the default new-namespace template, so every time the cluster admin creates a new namespace those configs will be automatically included.

Multus networks are configured in the Network Operator one by one, so we cannot include them in this template. Maybe someday there will be an easier, supported way to attach Multus networks to namespaces, but for now that’s the way to do it.

Summary of changes

In summary, we will need to:

  • Create two new user groups and include users on them
  • Remove user permission to create their own namespaces/projects
  • Create a new cluster role where user permissions to modify NetworkPolicies are removed (the tolerationsWhitelist annotation already takes care of restricting Tolerations)
  • Modify the default namespace creation template to include the tolerationsWhitelist annotation and default NetworkPolicies

OpenShift configuration (Restricting Zone usage)

Let’s begin with the new groups.

1-User groups

I have some users in my OpenShift cluster already configured (user1 to user10). I will add two new groups, one using the OC CLI and the other one with a descriptor (you can use either the CLI or the Web console to create it).

oc adm groups new regularusers user1 user2 user3 user4 user5

The second group:

apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: securezoneusers
users:
- user6
- user7
- user8
- user9
- user10
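You can confirm that both groups exist and contain the expected users:

$ oc get groups
NAME              USERS
regularusers      user1, user2, user3, user4, user5
securezoneusers   user6, user7, user8, user9, user10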

2-Remove namespaces creation permission

You can remove the project self-provisioner privileges by running this command:

$ oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}'

Edit the self-provisioners cluster role binding to prevent automatic updates to the role. Automatic updates reset the cluster roles to the default state.

$ oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }'
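You can verify that the binding has no subjects left and that the autoupdate annotation is set (output trimmed):

$ oc get clusterrolebinding.rbac self-provisioners -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "false"
  name: self-provisioners
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: self-provisioner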

3-Create a new cluster role where we remove user permission to modify NetworkPolicies

Instead of starting from scratch, I will take one of the default roles, copy it, and modify it. In my case, the best match is the “edit” role, so I will “export” it to a file (note that the oc export command is not supported anymore, so we will have to remove some keys manually).

I create an “editrestricted.yaml” file with the contents of the edit role object:

oc get clusterrole edit -o yaml > editrestricted.yaml

Once you have the file, you will need to make some changes on it:

  • Change the name of the object. I changed the name of the role to editrestricted
  • Remove not only the autoupdate key but all metadata (except the name of the object), because otherwise you can find that the role modifications are not applied due to auto-updates of the object. You will have something like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: editrestricted
rules:
- apiGroups:
...
  • Remove all occurrences of “networkpolicies”. In my case there are 5 occurrences; 4 of them are in rules where networkpolicies is one resource among others in the list, so we just need to remove it, but there is one rule where networkpolicies is the only resource (check below). In that case you need to remove the complete entry (you cannot have an empty resource list in your rules):
...
- verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
  apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - networkpolicies
- verbs:
...

After all those modifications you can create the object; in this case, I will use the OC CLI:

oc create -f editrestricted.yaml
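As a quick sanity check, this should print nothing, since no networkpolicies permission should be left in the new role:

$ oc describe clusterrole editrestricted | grep networkpolicies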

4-Modify the default namespace creation template including whitelist tolerations and default NetworkPolicies

As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.

In order to perform modifications in the default template, we need to create a new object with the configuration and then configure the projectRequestTemplate key in the Project cluster configuration object pointing to it.

We generate a file with the bootstrap project template:

$ oc adm create-bootstrap-project-template -o yaml > projecttemplate.yaml

Then we include the configuration that we want in that file. As you can see if you open it, the template is a list of objects that will be created every time a new project is created.

There will be a “Project” object where we include the annotation that prevents Toleration configuration by the users (remember to include here the two default Tolerations used by the PODs):

scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300, "key": "node.kubernetes.io/not-ready"},{"operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300, "key": "node.kubernetes.io/unreachable"}]'

And we also include additional NetworkPolicy objects. In our case, policies that allow intra-project POD traffic and access from the monitoring and ingress infrastructure (remember that the other NetworkPolicies must be configured after project creation, depending on whether the project will host “Regular applications” or “Secure Zone applications”).

The first part of the template object (where we performed the config) looks like this:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-same-namespace
  spec:
    podSelector:
    ingress:
    - from:
      - podSelector: {}
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-openshift-monitoring
  spec:
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            network.openshift.io/policy-group: monitoring
    podSelector: {}
    policyTypes:
    - Ingress
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-openshift-ingress
  spec:
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            network.openshift.io/policy-group: ingress
    podSelector: {}
    policyTypes:
    - Ingress
- apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
      scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300, "key": "node.kubernetes.io/not-ready"},{"operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300, "key": "node.kubernetes.io/unreachable"}]'
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
...

We have our object prepared, now we need to create it under the “openshift-config” namespace:

$ oc create -f projecttemplate.yaml -n openshift-config

Once the object is created in the OpenShift Cluster, we still have to reference it in the Project cluster config:

oc edit project.config.openshift.io/cluster

We include in the spec the project request template name that we already configured:

...
spec:
  projectRequestTemplate:
    name: project-request

Done. Now every time you create a new project, the namespace will include the annotation and the default NetworkPolicies will be created.

Bear in mind that there are other configurations that are not included in the default template, for example the securityzone=secure label and the default Tolerations, which must be included only when the namespace will be allowed to use the Secure Zone.

Configuration testing (Restricting Zone usage)

As a cluster administrator, I will create two namespaces and attach them to the different user groups that we created:

  • “rbac-regular” namespace attached to the “regularusers” group
  • “rbac-secure” namespace attached to the “securezoneusers” group

I create the project for the regular user test:

oc new-project rbac-regular

At this moment, we can check that the tolerationsWhitelist annotation was included in the namespace:

$ oc get namespace rbac-regular -o yaml | grep tolerations
    scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Exists", "effect":
          f:scheduler.alpha.kubernetes.io/tolerationsWhitelist: {}

and that the default NetworkPolicies are associated with the project:

$ oc get networkpolicies.networking.k8s.io -n rbac-regular
NAME                              POD-SELECTOR   AGE
allow-from-openshift-monitoring   <none>         23s
allow-from-same-namespace         <none>         23s
allow-from-openshift-ingress      <none>         23s

I will also create a project to test the Secure Zone allowed users:

oc new-project rbac-secure

Since this project will be allowed to use the Secure Zone, we need to add some manual configurations on top of the defaults:

  • Change the tolerationsWhitelist annotation to allow the Toleration needed to use the Secure Zone
  • (optional) Include a defaultTolerations annotation so users don’t need to include the Toleration configuration in their Kubernetes objects if they don’t want to
  • (optional) Include a default nodeSelector annotation so users don’t need to include the right nodeSelector in their objects. If you configure the default nodeSelector, the namespace will only be able to use the group of workers with that label, since all deployments will include that nodeSelector. If you want to let the namespace create applications in both Zones, do not include it (for other use cases, such as the “untrusted user zone”, it will be a requirement).
  • Include the label securityzone=secure in the namespace

We could list in the annotation the Tolerations that we are using, but since I don’t have more groups or Tolerations that must be restricted, for this test I will just remove the tolerationsWhitelist annotation from the rbac-secure namespace object; that will let users in that namespace use any Toleration they need.

We will also include the Toleration for the Secure Zone by default, so the users will only have to use the nodeSelector in their Deployments (the Toleration will always be configured even if they don’t specify it).

Check the changes in the namespace object (the new label and annotations):

apiVersion: v1
kind: Namespace
metadata:
  labels:
    securityzone: secure
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: ""
    openshift.io/requester: system:admin
    openshift.io/sa.scc.mcs: s0:c27,c9
    openshift.io/sa.scc.supplemental-groups: 1000720000/10000
    openshift.io/sa.scc.uid-range: 1000720000/10000
    openshift.io/node-selector: node-role.kubernetes.io/secure-worker=
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"Key": "securityzone", "Operator":"Equal", "Value": "secure", "effect": "NoSchedule"}]'
  creationTimestamp: "2020-07-21T07:18:06Z"
  managedFields:
...

Once we have the projects/namespaces, we, as cluster admin, can assign the permissions to the user groups that we already created, giving them the editrestricted role that we generated during the configuration steps:

$ oc adm policy add-role-to-group editrestricted regularusers -n rbac-regular
clusterrole.rbac.authorization.k8s.io/editrestricted added: "regularusers"

$ oc adm policy add-role-to-group editrestricted securezoneusers -n rbac-secure
clusterrole.rbac.authorization.k8s.io/editrestricted added: "securezoneusers"

The binding is configured, so let’s run these tests (network connection prevention between Zones was already tested in the previous post about network configuration):

  • As a user I will try to create a new project/namespace and it should fail.
  • As a user I will try to remove/create Network policies and it should fail.
  • As a user in regularusers group I will create an app in the Regular Zone and it should succeed.
  • As a user in regularusers group I will create an app in the Secure Zone and it should fail.
  • As a user in securezoneusers group I will create an app in the Regular Zone and it should succeed.
  • As a user in securezoneusers group I will create an app in the Secure Zone and it should succeed.

1-As a user I will try to create a new project/namespace and it should fail.

$ oc login
Authentication required for https://api.ocp.136.243.40.222.nip.io:6443 (openshift)
Username: user1
Password:
Login successful.

You have one project on this server: "default"

Using project "default".

$ oc new-project hola
Error from server (Forbidden): You may not request a new project via this API.

2-As a user I will try to remove/create Network policies and it should fail.

As user1 in project rbac-regular I try to list the NetworkPolicies:

$ oc get networkpolicy
Error from server (Forbidden): networkpolicies.networking.k8s.io is forbidden: User "user1" cannot list resource "networkpolicies" in API group "networking.k8s.io" in the namespace "rbac-regular"

Simulating that the user knows the name of the default NetworkPolicies, I try to remove one:

$ oc delete networkpolicies allow-from-same-namespace
Error from server (Forbidden): networkpolicies.networking.k8s.io "allow-from-same-namespace" is forbidden: User "user1" cannot delete resource "networkpolicies" in API group "networking.k8s.io" in the namespace "rbac-regular"

I also try to create a new NetworkPolicy:

$ oc create -f - << EOF
> apiVersion: networking.k8s.io/v1
> kind: NetworkPolicy
> metadata:
>   name: allow-from-openshift-ingress
> spec:
>   ingress:
>   - from:
>     - namespaceSelector:
>         matchLabels:
>           network.openshift.io/policy-group: ingress
>   podSelector: {}
>   policyTypes:
>   - Ingress
> EOF
Error from server (Forbidden): error when creating "STDIN": networkpolicies.networking.k8s.io is forbidden: User "user1" cannot create resource "networkpolicies" in API group "networking.k8s.io" in the namespace "rbac-regular"

3-As a user in regularusers group I will create an app in the Regular Zone and it should succeed.

As user1 in project rbac-regular I try to create this POD:

apiVersion: v1
kind: Pod
metadata:
  namespace: rbac-regular
  name: test-regular
  labels:
    app: test-regular
spec:
  containers:
  - name: test
    image: centos/tools
    command: ["/bin/bash", "-c", "sleep 9000000"]

It was allowed and the pod is running on the Regular Zone:

$ oc get pod -o wide -n rbac-regular | grep test | awk {'print $1" " $7'} | column -t
test-regular  worker0.ocp.136.243.40.222.nip.io

4-As a user in regularusers group I will create an app in the Secure Zone and it should fail.

As user1 in project rbac-regular I try to create this POD:

apiVersion: v1
kind: Pod
metadata:
  namespace: rbac-regular
  name: test-secure
  labels:
    app: test-secure
spec:
  containers:
  - name: test-secure-tole
    image: centos/tools
    command: ["/bin/bash", "-c", "sleep 9000000"]
  tolerations:
  - key: "securityzone"
    operator: "Equal"
    value: "secure"
    effect: "NoSchedule"
  nodeSelector:
    node-role.kubernetes.io/secure-worker: ''

It’s not allowed, and this error is shown:

pod tolerations (possibly merged with namespace default tolerations) conflict with its namespace whitelist

5-As a user in securezoneusers group I will create an app in the Regular Zone and it should succeed.

As user8 in project rbac-secure I try to create this POD:

apiVersion: v1
kind: Pod
metadata:
  namespace: rbac-secure
  name: test-regular
  labels:
    app: test-regular
spec:
  containers:
  - name: test
    image: centos/tools
    command: ["/bin/bash", "-c", "sleep 9000000"]

It was allowed and the pod is running on the Regular Zone:

$ oc get pod -o wide -n rbac-secure | grep test | awk {'print $1" " $7'} | column -t
test-regular  worker0.ocp.136.243.40.222.nip.io

6-As a user in securezoneusers group I will create an app in the Secure Zone and it should succeed.

As user8 in project rbac-secure I try to create this POD (bear in mind that I’m not including the Toleration or the nodeSelector in the POD definition, since we configured them by default in the namespace object):

apiVersion: v1
kind: Pod
metadata:
  namespace: rbac-secure
  name: test-secure
  labels:
    app: test-secure
spec:
  containers:
  - name: test-secure-tole
    image: centos/tools
    command: ["/bin/bash", "-c", "sleep 9000000"]

It’s allowed and running on the Secure Zone workers (remember that it will always land on worker4, since worker2 and worker3 have a PreferNoSchedule Taint):

$ oc get pod -o wide -n rbac-secure | grep test | awk {'print $1" " $7'} | column -t
test-regular  worker0.ocp.136.243.40.222.nip.io
test-secure   worker4.ocp.136.243.40.222.nip.io

We can check that the Secure Zone Toleration is there along with the cluster default ones:

...
tolerations:
- effect: NoExecute
  key: node.kubernetes.io/not-ready
  operator: Exists
  tolerationSeconds: 300
- effect: NoExecute
  key: node.kubernetes.io/unreachable
  operator: Exists
  tolerationSeconds: 300
- effect: NoSchedule
  key: securityzone
  operator: Equal
  value: secure
...

Done! Let’s review the steps that we performed

We are done! Let’s review the steps that we followed, recalling the architecture that we configured throughout this series:

Step 0. Static IPs

Configure the static IPs, including the Secure Zone access network, when you create your new nodes.

Step 1. Label and Taint all workers in the new Zone

You need to include a new label (I chose to create a new “role”) in all workers located in the Zone:

oc label node <newzoneworker> node-role.kubernetes.io/secure-worker=''

Since I created a new role, I remove the default role from those workers:

oc label node <newzoneworker> node-role.kubernetes.io/worker-

Create a Taint to prevent workloads from being scheduled into the new Zone unless you configure a specific Toleration:

oc adm taint node worker2.ocp.136.243.40.222.nip.io securityzone=secure:NoSchedule
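The same Taint goes on the rest of the Secure Zone workers (worker3 and worker4 in my lab):

oc adm taint node worker3.ocp.136.243.40.222.nip.io securityzone=secure:NoSchedule
oc adm taint node worker4.ocp.136.243.40.222.nip.io securityzone=secure:NoSchedule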

Step 2. Label and Taint the “access” workers in the new Zone

In the workers where the IngressController will be located (the ones with access to the Secure Zone Access Network), I include an additional label and taint:

oc label node worker2.ocp.136.243.40.222.nip.io ingressaccess=true

The Taint is there because we prefer not to schedule any workload on those nodes if possible:

oc adm taint node worker2.ocp.136.243.40.222.nip.io ingressaccess=true:PreferNoSchedule
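worker3 is the other access worker in my lab (we run two IngressController replicas), so it gets the same label and Taint:

oc label node worker3.ocp.136.243.40.222.nip.io ingressaccess=true
oc adm taint node worker3.ocp.136.243.40.222.nip.io ingressaccess=true:PreferNoSchedule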

Step 3. Create a new MachineConfigPool for the new role

Since we created a new role, we need to create a new MachineConfigPool in order to allow upgrades in those nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: secure-worker
spec:
  machineConfigSelector:
    matchExpressions:
    - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,secure-worker]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/secure-worker: ""
  paused: false
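You can check that the new pool picked up the Secure Zone nodes (output trimmed; the rendered config name will differ in your cluster):

$ oc get machineconfigpool secure-worker
NAME            CONFIG                         UPDATED   UPDATING   DEGRADED   MACHINECOUNT
secure-worker   rendered-secure-worker-xxxxx   True      False      False      3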

Step 4. Create a new IngressController

Create a new IngressController including a namespaceSelector, so we can be sure to choose which Ingress each namespace uses:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: secure-ingress
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: HostNetwork
  domain: secureapps.ocp.136.243.40.222.nip.io
  replicas: 2
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/secure-worker: ""
        ingressaccess: "true"
  namespaceSelector:
    matchExpressions:
    - key: securityzone
      operator: In
      values:
      - secure

Since we don’t want the default IngressController to serve the namespaces that use the Secure IngressController, we configure the opposite selector in the default IngressController:

oc edit ingresscontrollers.operator.openshift.io -n openshift-ingress-operator default

...
spec:
  namespaceSelector:
    matchExpressions:
    - key: securityzone
      operator: DoesNotExist
...

Lastly, since the new IngressController PODs will be running on the Secure Zone workers, you must include the Toleration in the openshift-ingress namespace:

oc edit namespace openshift-ingress

...
  annotations:
    openshift.io/node-selector: ''
    openshift.io/sa.scc.mcs: 's0:c24,c4'
    openshift.io/sa.scc.supplemental-groups: 1000560000/10000
    openshift.io/sa.scc.uid-range: 1000560000/10000
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"Key": "securityzone", "Operator":"Equal", "Value": "secure", "effect": "NoSchedule"}]'
  managedFields:
...
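To confirm that the new router PODs landed on the access workers (output trimmed; the POD name suffixes are random):

$ oc get pod -n openshift-ingress -o wide | grep secure-ingress
router-secure-ingress-xxxxx   1/1   Running   0   2m   worker2.ocp.136.243.40.222.nip.io
router-secure-ingress-yyyyy   1/1   Running   0   2m   worker3.ocp.136.243.40.222.nip.io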

Step 5. Include a label in the default namespace to permit IngressController traffic

Since our new IngressController is using HostNetwork access, we have to include this label in the default namespace in order to make the required NetworkPolicies work:

oc edit namespace default

apiVersion: v1
kind: Namespace
metadata:
  labels:
    network.openshift.io/policy-group: ingress
  annotations:
...

Step 6. Remove self-provisioning projects

Don’t let the users create their own projects/namespaces:

$ oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}'

Remove automatic updates that could reset the cluster roles to the default state:

$ oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }'

Step 7. Modify the project creation template

Generate a project creation template:

$ oc adm create-bootstrap-project-template -o yaml > projecttemplate.yaml

Include the default NetworkPolicy objects and an annotation in the project in order to restrict the Tolerations that users can configure:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-same-namespace
  spec:
    podSelector:
    ingress:
    - from:
      - podSelector: {}
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-openshift-monitoring
  spec:
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            network.openshift.io/policy-group: monitoring
    podSelector: {}
    policyTypes:
    - Ingress
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-openshift-ingress
  spec:
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            network.openshift.io/policy-group: ingress
    podSelector: {}
    policyTypes:
    - Ingress
- apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
      scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300, "key": "node.kubernetes.io/not-ready"},{"operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300, "key": "node.kubernetes.io/unreachable"}]'
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
...

Once you have the template ready, create it in the openshift-config namespace:

$ oc create -f projecttemplate.yaml -n openshift-config

Then you need to add a reference to this template in the cluster project config object:

oc edit project.config.openshift.io/cluster

Add the reference in the spec:

...
spec:
  projectRequestTemplate:
    name: project-request

Step 8. Create groups of users

I created two groups, one per Zone (but you can create as many as you want):

oc adm groups new <zone1-users> <user1> <user2> ... <usern>
oc adm groups new <zone2-users> <usera> <userb> ... <userx>

Step 9. Create a new role that restricts NetworkPolicy management

Create a new role that has no access to NetworkPolicy management. I created one based on the default “edit” role:

oc get clusterrole edit -o yaml > editrestricted.yaml

In that file:

  • Change the name of the object
  • Remove all metadata except the name
  • Remove all occurrences of “networkpolicies”

After all those modifications you can create the object; in this case, I will use the OC CLI:

oc create -f editrestricted.yaml

Step 10. Create namespaces and assign users and roles

Regular Zone projects can be created with this command with no further configuration:

oc new-project <project name>

For projects that are allowed in the Secure Zone there are some additional steps to perform on the namespace object:

  • Add a label to select the IngressController (securityzone=secure in the example)
  • Remove the tolerationsWhitelist annotation
  • (optional) Include a defaultTolerations annotation so users don’t need to include the Toleration configuration in their Deployments if they don’t want to
  • (optional) Include a default nodeSelector in the annotations so users don’t need to include it in their Deployment objects. If you configure the default nodeSelector, the namespace will only be able to use the group of workers with that label, since all deployments will include that nodeSelector. If you want to let the namespace create applications in both Zones, do not include it (for other use cases, such as the “untrusted user zone”, it will be a requirement).

You can see here an example:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    securityzone: secure
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: ""
    openshift.io/requester: system:admin
    openshift.io/sa.scc.mcs: s0:c27,c9
    openshift.io/sa.scc.supplemental-groups: 1000720000/10000
    openshift.io/sa.scc.uid-range: 1000720000/10000
    openshift.io/node-selector: node-role.kubernetes.io/secure-worker=
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"Key": "securityzone", "Operator":"Equal", "Value": "secure", "effect": "NoSchedule"}]'
  creationTimestamp: "2020-07-21T07:18:06Z"
  managedFields:
...

Give access to the users (or the user group) by binding the role that we created to the namespace:

oc adm policy add-role-to-group <new role created> <group> -n <namespace>

Step 11. Create additional networks for Multus

Include the additionalNetworks key in the cluster Network Operator object:

oc edit networks.operator.openshift.io cluster

In that additionalNetworks key you have to specify:

  • The namespace that you allow to use that VLAN
  • The plugin type
  • Master interface
  • IPAM configuration

For example:

...
spec:
  additionalNetworks:
  - name: test-vlan
    namespace: test-securezone
    rawCNIConfig: '{ "cniVersion": "0.3.1", "type": "macvlan", "capabilities": { "ips": true }, "master": "ens4", "mode": "bridge", "ipam": { "type": "static" } }'
    type: Raw
  clusterNetwork:
...
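Once the Network Operator creates the NetworkAttachmentDefinition in that namespace, PODs can request the additional network with the k8s.v1.cni.cncf.io/networks annotation. A minimal sketch (the POD name and the static IP are just examples; passing an IP in the annotation works here because the rawCNIConfig above enables the ips capability with static IPAM):

apiVersion: v1
kind: Pod
metadata:
  name: test-multus  # illustrative name
  namespace: test-securezone
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{"name": "test-vlan", "ips": ["192.168.7.10/24"]}]'  # example static IP
spec:
  containers:
  - name: test
    image: centos/tools
    command: ["/bin/bash", "-c", "sleep 9000000"]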

Step 12. To infinity and beyond

Add whatever configuration makes sense for you. For example, you could have different StorageClasses; in our case we could add a dedicated StorageClass for our Secure Zone and then restrict its usage to PVs for the applications in that Zone. The sky is the limit.
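As a sketch of that idea, a dedicated StorageClass could look like this (the name is illustrative and the provisioner is an assumption; it depends entirely on your storage backend):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: secure-zone-storage  # illustrative name
provisioner: kubernetes.io/no-provisioner  # assumption: pre-provisioned local volumes; use your backend's provisioner
volumeBindingMode: WaitForFirstConsumer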

Just in case you want to start over, or review any of the steps performed to configure the Secure Zone, check the previous posts of this series.
