Tutorial: Setting Up a Net Peering With OKS CLI (Beta)

This feature is currently in beta. For more information about beta services, see the Glossary and our General Terms and Conditions of Sale.

In this tutorial, you will learn how to set up a Net peering, either between one of your Nets and one of your projects, or between two of your projects.

You can use a Net peering to enable secure communication between clusters and external resources located in different accounts and Nets.

When you create a project, OKS automatically creates a Net inside it. Net peering always happens between Nets and not between projects themselves. This means that whether you peer a Net with an OKS project or two projects together, what you are actually doing is peering their underlying Nets. For the sake of clarity, this tutorial uses "project" to mean the project’s Net.

Although the peering itself is established at the Net level, the peering request must be initiated or accepted from a cluster within the project, using custom resources. Once the project is peered, all clusters within it automatically gain access to the peered Net.

You can set up the peering in one of the following ways:

  • From the cluster to the Net, using a peering request custom resource.

  • From the Net to the cluster, using a peering acceptance custom resource.

  • From the cluster of a project to the cluster of another project, using both a peering request custom resource and a peering acceptance custom resource.

You cannot create a Net peering between a cluster and a Net that have overlapping CIDR blocks. The CIDR block used when creating the Net must be different from that of the project containing the cluster from which you want to initiate the peering. The same applies when peering two projects together. For more information, see Getting Information About a Specific Project.
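
The overlap rule can be checked locally before you create the peering. The following sketch is a hypothetical helper, not part of OKS CLI, and the CIDR values are examples only:

```shell
# Check whether two IPv4 CIDR blocks overlap. Two blocks overlap when the
# start of one falls before the end of the other.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

cidr_overlap() {
  local start1 start2 end1 end2
  start1=$(ip_to_int "${1%/*}"); end1=$(( start1 + (1 << (32 - ${1#*/})) - 1 ))
  start2=$(ip_to_int "${2%/*}"); end2=$(( start2 + (1 << (32 - ${2#*/})) - 1 ))
  if [ "$start1" -le "$end2" ] && [ "$start2" -le "$end1" ]; then
    echo "overlap: peering impossible"
  else
    echo "disjoint: peering possible"
  fi
}

cidr_overlap 10.50.0.0/16 192.168.0.0/16   # prints "disjoint: peering possible"
cidr_overlap 10.50.0.0/16 10.50.1.0/24     # prints "overlap: peering impossible"
```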

You can also peer your cluster with a Net from a different account; the logic of this tutorial stays the same.

Setting Up a Net Peering From Your Cluster to Your Net

Creating the Net Peering

Before you begin:

  • Create a Net. This action can be done using Cockpit or OSC CLI. For more information, see Creating a Net.

  • Create a cluster to be peered with your Net. For more information, see Creating a Cluster.

  1. Create the custom resource for the peering request. This resource is a YAML file in which you must add the following pieces of information:

    • accepterNetId: the ID of the Net you created.

    • accepterOwnerId: the ID of your OUTSCALE account.

    peering-request.yaml
    apiVersion: oks.dev/v1beta
    kind: NetPeeringRequest
    metadata:
      name: peering-request
    spec:
      accepterNetId: ID_OF_YOUR_NET
      accepterOwnerId: "ID_YOUR_OUTSCALE_ACCOUNT"
  2. Apply the peering request file to your cluster using the following command:

    Request sample
    $ oks-cli cluster --project-name "NAME_OF_PROJECT" --cluster-name NAME_OF_CLUSTER kubectl apply -f ~/PATH/peering-request.yaml
    Result sample
    netpeeringrequest.oks.dev/peering-request created
  3. Run the following command to obtain the Net peering ID needed to accept the peering request:

    Request sample
    $ oks-cli cluster --project-name "NAME_OF_PROJECT" --cluster-name NAME_OF_CLUSTER kubectl get npr
    Result sample
    NAME              SOURCE NET ID        ACCEPTER NET ID   NET PEERING ID      STATE NAME           STATE MESSAGE                                    EXPIRATION DATE
    peering-request   ID_OF_YOUR_CLUSTER   ID_OF_YOUR_NET    ID_OF_NET_PEERING   pending-acceptance   Pending acceptance by ID_YOUR_OUTSCALE_ACCOUNT
  4. Accept the peering request on the Net side, using OSC CLI:

    Request sample
    $ oapi-cli AcceptNetPeering --NetPeeringId "ID_OF_NET_PEERING"
  5. You can check that the peering has been set up correctly with the following command:

    Request sample
    $ oks-cli cluster --project-name "project-1" --cluster-name cluster-1 kubectl get npeer
    Result sample
    NAME                SOURCE NET ID        ACCEPTER NET ID   NET PEERING ID      STATE NAME   STATE MESSAGE   EXPIRATION DATE
    ID_OF_NET_PEERING   ID_OF_YOUR_CLUSTER   ID_OF_YOUR_NET    ID_OF_NET_PEERING   active       Active          2025-08-04T11:28:46.000Z

    You can also check that the peering has been set up correctly using Cockpit and OSC CLI. For more information, see Getting Information About Your Net Peerings.

    A Net peering never expires. You can disregard the expiration date appearing in the result of the OKS CLI command.
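
Steps 3 and 4 can be chained: extract the Net peering ID from the `kubectl get npr` table, then pass it to AcceptNetPeering. A minimal sketch on a saved sample of that output (all IDs hypothetical); in practice, pipe the oks-cli command output in directly:

```shell
# Saved sample of the `kubectl get npr` output (hypothetical IDs).
npr_table='NAME              SOURCE NET ID   ACCEPTER NET ID   NET PEERING ID   STATE NAME
peering-request   vpc-11111111    vpc-22222222      pcx-12345678     pending-acceptance'

# The Net peering ID is the 4th column of the data row.
peering_id=$(printf '%s\n' "$npr_table" | awk 'NR == 2 { print $4 }')
echo "$peering_id"   # prints "pcx-12345678"

# Then accept the request on the Net side:
# oapi-cli AcceptNetPeering --NetPeeringId "$peering_id"
```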

Managing Traffic

Once the peering is set up, traffic must be routed and allowed between both sides of the connection:

If all peered accounts have access to OKS, no further action is required. OKS will automatically handle traffic.

If any peered account does not have access to OKS, the user of that account must manually configure routing and security:

  1. Create a route to allow traffic to reach the peered project.

    The route must:

    • Use the peered project’s CIDR block as the destination.

    • Use the ID of the Net peering as the target.

    For more information, see Tutorial: Setting Up a Net Peering and Creating a Route.

  2. Configure security group rules for the VMs in your Net.

    Add one or more rules allowing traffic to flow between both sides of the peering:

    • For inbound flow: allow traffic from one or more security groups of the peer Net.

    • For outbound flow: allow traffic to one or more security groups of the peer Net.

    For more information, see Adding Rules to a Security Group.
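
The two manual steps above can be sketched as OSC CLI calls. All IDs below are hypothetical placeholders, and the security group rule shown is a simple CIDR-based example that opens all TCP ports; adjust both to your own setup. The calls are wrapped in a function so you can review them before running them with your own values; this is not an official OKS helper:

```shell
setup_peering_traffic() {
  local peering_id="$1" route_table_id="$2" peer_cidr="$3" sg_id="$4"

  # Route the peered project's CIDR block through the Net peering.
  oapi-cli CreateRoute --RouteTableId "$route_table_id" \
    --DestinationIpRange "$peer_cidr" --NetPeeringId "$peering_id"

  # Allow inbound TCP traffic from the peered project's CIDR block.
  # (You can instead reference one or more security groups of the peer Net.)
  oapi-cli CreateSecurityGroupRule --Flow Inbound \
    --SecurityGroupId "$sg_id" --IpProtocol tcp \
    --FromPortRange 1 --ToPortRange 65535 --IpRange "$peer_cidr"
}

# Example invocation (replace every argument with your own values):
# setup_peering_traffic pcx-12345678 rtb-12345678 10.50.0.0/16 sg-12345678
```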

Setting Up a Net Peering From Your Net to Your Cluster

Creating the Net Peering

Before you begin:

  • Create a Net. This action can be done using Cockpit or OSC CLI. For more information, see Creating a Net.

  • Create a cluster to be peered with your Net. For more information, see Creating a Cluster.

  • Create a node pool and get information about that node pool. For more information, see Listing Your Node Pools > Listing Node Pools Using kubectl. This enables you to gather the following pieces of information about the cluster you created:

    • the ID of your OKS account (oks.account-id),

    • the ID of your OKS network (oks.network_id).

  1. Create a Net peering, with SourceNetId set to the ID of the Net you previously created, AccepterNetId set to the ID of your OKS network (which represents the ID of your cluster), and AccepterOwnerId set to the ID of your OKS account. For more information, see Creating a Net Peering:

    Request sample
    $ oapi-cli CreateNetPeering --SourceNetId "ID_OF_YOUR_NET" --AccepterNetId "ID_OF_YOUR_CLUSTER" --AccepterOwnerId '"ID_OF_OKS_ACCOUNT"'
    Result sample
    {
        "ResponseContext": {
            "RequestId": "b99e373a-4abe-4249-bd37-08dbfe9caa1c"
        },
        "NetPeering": {
            "Tags": [],
            "State": {
                "Name": "pending-acceptance",
                "Message": "Pending acceptance by ID_OF_OKS_ACCOUNT"
            },
            "AccepterNet": {
                "NetId": "ID_OF_YOUR_CLUSTER",
                "IpRange": "10.50.0.0/16",
                "AccountId": "ID_OF_OKS_ACCOUNT"
            },
            "SourceNet": {
                "NetId": "ID_OF_YOUR_NET",
                "IpRange": "192.168.0.0/16",
                "AccountId": "ID_YOUR_OUTSCALE_ACCOUNT"
            },
            "NetPeeringId": "ID_OF_NET_PEERING"
        }
    }
  2. Create the custom resource for the peering acceptance. This resource is a YAML file in which you must add the ID of the previously obtained Net peering:

    peering-acceptance.yaml
    apiVersion: oks.dev/v1beta
    kind: NetPeeringAcceptance
    metadata:
      name: peering-acceptance
    spec:
      netPeeringId: ID_OF_NET_PEERING
  3. Apply the peering acceptance file to your cluster:

    Request sample
    $ oks-cli cluster --project-name "project-1" --cluster-name cluster-1 kubectl apply -f ~/PATH/peering-acceptance.yaml
    Result sample
    netpeeringacceptance.oks.dev/peering-acceptance created
  4. You can check that the peering has been set up correctly with the following command:

    Request sample
    $ oks-cli cluster --project-name "project-1" --cluster-name cluster-1 kubectl get npeer
    Result sample
    NAME                SOURCE NET ID    ACCEPTER NET ID      NET PEERING ID      STATE NAME   STATE MESSAGE   EXPIRATION DATE
    ID_OF_NET_PEERING   ID_OF_YOUR_NET   ID_OF_YOUR_CLUSTER   ID_OF_NET_PEERING   active       Active          2025-08-04T11:28:46.000Z

    You can also check that the peering has been set up correctly using Cockpit. For more information, see Getting Information About Your Net Peerings.

    A Net peering never expires. You can disregard the expiration date appearing in the result of the OKS CLI command.
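
For convenience, the Net peering ID can be extracted from the CreateNetPeering JSON response of step 1 instead of being copied by hand. A minimal sketch, using a trimmed sample of that response with a hypothetical ID; in practice, capture the real oapi-cli output:

```shell
# Trimmed sample of the CreateNetPeering response (hypothetical ID). In
# practice: response=$(oapi-cli CreateNetPeering --SourceNetId ...)
response='{"NetPeering": {"NetPeeringId": "pcx-12345678"}}'

# Extract the NetPeeringId field for use in peering-acceptance.yaml.
peering_id=$(printf '%s' "$response" | sed -n 's/.*"NetPeeringId": *"\([^"]*\)".*/\1/p')
echo "$peering_id"   # prints "pcx-12345678"
```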

Managing Traffic

Once the peering is set up, traffic must be routed and allowed between both sides of the connection:

If all peered accounts have access to OKS, no further action is required and OKS will automatically handle traffic.

If any peered account does not have access to OKS, the user of that account must manually configure routing and security:

  1. Create a route to allow traffic to reach the peered project:

    The route must:

    • Use the peered project’s CIDR block as the destination.

    • Use the ID of the Net peering as the target.

    For more information, see Tutorial: Setting Up a Net Peering and Creating a Route.

  2. Configure security group rules for the VMs in your Net.

    Add one or more rules allowing traffic to flow between both sides of the peering:

    • For inbound traffic: Allow traffic from one or more security groups of the peer Net.

    • For outbound traffic: Allow traffic to one or more security groups of the peer Net.

    For more information, see Adding Rules to a Security Group.

Setting Up a Net Peering From One Cluster to Another

Creating the Net Peering

Before you begin:

  • Create two projects with different CIDR blocks. For more information, see Creating a Project.

  • Create two clusters to be peered together, one in each of the previously created projects. For more information, see Creating a Cluster. For clarity’s sake, these clusters will be called "source cluster" and "target cluster" in this tutorial.

  • Create a node pool in the target cluster. For more information, see Creating a Node Pool.

  1. Gather the following pieces of information:

    • The ID of the target project (oks.network_id). This is done by getting information about the node pool created in the target cluster. For convenience, you can cherry-pick the required information by using the egrep command:

      Request sample
      $ oks-cli cluster --project-name "target-project" --cluster-name target-cluster kubectl describe nodepool target-nodepool | egrep "oks.network_id"
      Result sample
      oks.network_id=NET_ID_OF_TARGET_PROJECT
    • The ID of the account that created the target project, in this case your account. This ID can be obtained using the following command:

      Request sample
      $ oks-cli project quotas
      Result sample
      {
          "quotas": [
              {
                  ...
                  "AccountId": "ID_OF_YOUR_ACCOUNT",
                  ...
              }
          ]
      ...
      }
  2. Create the YAML custom resource for the peering request, replacing the placeholder values with your own:

    • accepterNetId: the ID of the target project.

    • accepterOwnerId: the ID of your account.

    peering-request.yaml
    apiVersion: oks.dev/v1beta
    kind: NetPeeringRequest
    metadata:
      name: peering-request
    spec:
      accepterNetId: NET_ID_OF_TARGET_PROJECT
      accepterOwnerId: "ID_OF_YOUR_ACCOUNT"
  3. Apply the peering request to the source cluster:

    Request sample
    $ oks-cli cluster --project-name "source-project" --cluster-name source-cluster kubectl apply -f ~/PATH/peering-request.yaml
    Result sample
    netpeeringrequest.oks.dev/peering-request created
  4. Run the following command to obtain the Net peering ID needed to accept the peering request:

    Request sample
    $ oks-cli cluster --project-name "source-project" --cluster-name source-cluster kubectl get npr
    Result sample
    NAME              SOURCE NET ID              ACCEPTER NET ID            NET PEERING ID      STATE NAME           STATE MESSAGE                            EXPIRATION DATE
    peering-request   NET_ID_OF_SOURCE_PROJECT   NET_ID_OF_TARGET_PROJECT   ID_OF_NET_PEERING   pending-acceptance   Pending acceptance by ID_OF_YOUR_ACCOUNT
  5. Create the YAML custom resource for the peering acceptance, replacing the placeholder values with your own:

    peering-acceptance.yaml
    apiVersion: oks.dev/v1beta
    kind: NetPeeringAcceptance
    metadata:
      name: peering-acceptance
    spec:
      netPeeringId: ID_OF_NET_PEERING
  6. Apply the peering acceptance file to the target cluster:

    Request sample
    $ oks-cli cluster --project-name "target-project" --cluster-name target-cluster kubectl apply -f ~/PATH/peering-acceptance.yaml
    Result sample
    netpeeringacceptance.oks.dev/peering-acceptance created
  7. You can check that the peering has been set up correctly with the following command:

    Request sample
    $ oks-cli cluster --project-name "target-project" --cluster-name target-cluster kubectl get npeer
    Result sample
    NAME                SOURCE NET ID              ACCEPTER NET ID            NET PEERING ID      STATE NAME   STATE MESSAGE   EXPIRATION DATE
    ID_OF_NET_PEERING   NET_ID_OF_SOURCE_PROJECT   NET_ID_OF_TARGET_PROJECT   ID_OF_NET_PEERING   active       Active          2025-08-04T11:28:46.000Z

    A Net peering never expires. You can disregard the expiration date appearing in the result of the OKS CLI command.

OKS handles routing automatically, so there is no need to create any route once the Net peering is set up.
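
As a further convenience, steps 4 and 5 can be chained: parse the Net peering ID out of the `kubectl get npr` table and generate peering-acceptance.yaml from it. A sketch using a saved sample of that table (all IDs hypothetical); in practice, pipe the oks-cli command output in directly:

```shell
# Saved sample of the `kubectl get npr` output (hypothetical IDs).
npr_table='NAME              SOURCE NET ID   ACCEPTER NET ID   NET PEERING ID   STATE NAME
peering-request   vpc-11111111    vpc-22222222      pcx-12345678     pending-acceptance'

# The Net peering ID is the 4th column of the data row.
peering_id=$(printf '%s\n' "$npr_table" | awk 'NR == 2 { print $4 }')

# Generate the peering acceptance custom resource with that ID.
cat > peering-acceptance.yaml <<EOF
apiVersion: oks.dev/v1beta
kind: NetPeeringAcceptance
metadata:
  name: peering-acceptance
spec:
  netPeeringId: $peering_id
EOF
```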

Testing Connectivity

To access applications running in OKS, you need to expose them using a service (either a NodePort or a LoadBalancer). These services then use your newly created Net peering. The following steps show you how to create a LoadBalancer service to test the connectivity of your Net peering:

Before you begin:

  • At least one node pool in each cluster. For more information, see Creating a Node Pool.

  • A source workload and a target workload, one on each side of the peering.

  1. Create the following deployment and apply the file to the target cluster:

    echoheaders-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: echoheaders
      namespace: default
      labels:
        app: echoheaders
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echoheaders
      template:
        metadata:
          labels:
            app: echoheaders
        spec:
          containers:
          - name: echoheaders
            image: gcr.io/google_containers/echoserver:1.10
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 8080
              protocol: TCP
    Request sample
    $ oks-cli cluster --project-name "target-project" --cluster-name target-cluster kubectl apply -f ~/PATH/echoheaders-deployment.yaml
  2. Create a LoadBalancer type service with an internal load balancer and apply it to the target cluster:

    echoheaders-loadbalancer.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: echoheaders-lb-internal
      namespace: default
      labels:
        app: echoheaders
      annotations:
        service.beta.kubernetes.io/osc-load-balancer-name: "internal-lb-test"
        service.beta.kubernetes.io/osc-load-balancer-scheme: "internal"
    spec:
      ports:
      - port: 80
        name: http
        protocol: TCP
        targetPort: 8080
      selector:
        app: echoheaders
      type: LoadBalancer
    Request sample
    $ oks-cli cluster --project-name "target-project" --cluster-name target-cluster kubectl apply -f ~/PATH/echoheaders-loadbalancer.yaml
  3. Retrieve the DNS name of the load balancer:

    Request sample
    $ oks-cli cluster --project-name "target-project" --cluster-name target-cluster kubectl get svc
    Result sample
    NAME                                   TYPE           CLUSTER-IP      EXTERNAL-IP                                            PORT(S)        AGE
    echoheaders-lb-internal                LoadBalancer   10.92.72.99     internal-internal-lb-test.eu-west-2.lbu.outscale.com   80:31002/TCP   8s
  4. Check that the DNS name of the load balancer resolves to a private IP address:

    Request sample
    $ host internal-internal-lb-test.eu-west-2.lbu.outscale.com
    Result sample
    internal-internal-lb-test.eu-west-2.lbu.outscale.com has address 10.50.51.22
  5. From the source cluster, run the following command (using the correct DNS name) to test the service and confirm that the clusters can communicate properly:

    Request sample
    $ oks-cli cluster --project-name "source-project" --cluster-name source-cluster kubectl run curl-pod --image=curlimages/curl --restart=Never --rm -it -- curl http://internal-internal-lb-test.eu-west-2.lbu.outscale.com
    Result sample
    Hostname: echoheaders-b6b6b99c8-kfv5h
    
    Pod Information:
    	-no pod information available-
    
    Server values:
    	server_version=nginx: 1.13.3 - lua: 10008
    
    Request Information:
    	client_address=10.91.3.142
    	method=GET
    	real path=/
    	query=
    	request_version=1.1
    	request_scheme=http
    	request_uri=http://internal-internal-lb-test.eu-west-2.lbu.outscale.com:8080/
    
    Request Headers:
    	accept=*/*
    	host=internal-internal-lb-test.eu-west-2.lbu.outscale.com
    	user-agent=curl/8.16.0
    
    Request Body:
    	-no body in request-
    
    pod "curl-pod" deleted

Deleting the Net Peering

You can delete the Net peering using the kubectl delete npeer command, followed by the ID of the Net peering:

Request sample
$ oks-cli cluster --project-name "project-1" --cluster-name cluster-1 kubectl delete npeer ID_OF_NET_PEERING
Result sample
netpeering.oks.dev "ID_OF_NET_PEERING" deleted

Related Pages